Feb 16 12:31:33 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 12:31:33 crc restorecon[4675]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 12:31:33 crc restorecon[4675]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc 
restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc 
restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc 
restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc 
restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:31:33 crc restorecon[4675]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:33 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 
crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc 
restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc 
restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc 
restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc 
restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 12:31:34 crc restorecon[4675]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 12:31:34 crc kubenswrapper[4799]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 12:31:34 crc kubenswrapper[4799]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 12:31:34 crc kubenswrapper[4799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 12:31:34 crc kubenswrapper[4799]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 12:31:34 crc kubenswrapper[4799]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 12:31:34 crc kubenswrapper[4799]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.867719 4799 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880161 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880214 4799 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880221 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880228 4799 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880234 4799 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880241 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880247 4799 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880252 4799 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880257 4799 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 
12:31:34.880264 4799 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880271 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880280 4799 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880289 4799 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880298 4799 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880304 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880309 4799 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880314 4799 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880319 4799 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880333 4799 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880338 4799 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880343 4799 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880350 4799 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880356 4799 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880362 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880368 4799 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880375 4799 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880381 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880386 4799 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880391 4799 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880396 4799 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880401 4799 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880406 4799 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880411 4799 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880418 4799 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880424 4799 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880430 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880435 4799 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880443 4799 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880448 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880453 4799 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880457 4799 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880463 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880467 4799 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880473 4799 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880479 4799 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880484 4799 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880489 4799 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880493 4799 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880498 4799 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880503 4799 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880508 4799 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880513 4799 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880518 4799 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880523 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880528 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880535 4799 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880540 4799 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880545 4799 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880550 4799 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880555 4799 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880559 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880564 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880569 4799 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880573 4799 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880578 4799 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880583 4799 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880588 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880593 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880597 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880602 4799 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.880607 4799 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881589 4799 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881608 4799 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881623 4799 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881631 4799 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881639 4799 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881648 4799 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881656 4799 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881663 4799 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881669 4799 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881675 4799 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881681 4799 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881689 4799 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881695 4799 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881701 4799 flags.go:64] FLAG: --cgroup-root=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881706 4799 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881712 4799 flags.go:64] FLAG: --client-ca-file=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881718 4799 flags.go:64] FLAG: --cloud-config=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881723 4799 flags.go:64] FLAG: --cloud-provider=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881729 4799 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881737 4799 flags.go:64] FLAG: --cluster-domain=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881742 4799 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881748 4799 flags.go:64] FLAG: --config-dir=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881753 4799 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881759 4799 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881766 4799 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881772 4799 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881777 4799 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881784 4799 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881790 4799 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881796 4799 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881801 4799 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881807 4799 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881813 4799 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881820 4799 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881826 4799 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881831 4799 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881836 4799 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881844 4799 flags.go:64] FLAG: --enable-server="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881849 4799 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881856 4799 flags.go:64] FLAG: --event-burst="100"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881863 4799 flags.go:64] FLAG: --event-qps="50"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881869 4799 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881874 4799 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881880 4799 flags.go:64] FLAG: --eviction-hard=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881887 4799 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881892 4799 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881898 4799 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881904 4799 flags.go:64] FLAG: --eviction-soft=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881910 4799 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881916 4799 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881921 4799 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881927 4799 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881932 4799 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881938 4799 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881943 4799 flags.go:64] FLAG: --feature-gates=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881950 4799 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881956 4799 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881962 4799 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881968 4799 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881974 4799 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881981 4799 flags.go:64] FLAG: --help="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881987 4799 flags.go:64] FLAG: --hostname-override=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881993 4799 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.881998 4799 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882004 4799 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882010 4799 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882015 4799 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882021 4799 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882027 4799 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882035 4799 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882041 4799 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882046 4799 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882052 4799 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882058 4799 flags.go:64] FLAG: --kube-reserved=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882064 4799 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882069 4799 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882075 4799 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882080 4799 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882086 4799 flags.go:64] FLAG: --lock-file=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882092 4799 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882097 4799 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882103 4799 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882118 4799 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882146 4799 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882152 4799 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882158 4799 flags.go:64] FLAG: --logging-format="text"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882164 4799 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882171 4799 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882176 4799 flags.go:64] FLAG: --manifest-url=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882182 4799 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882190 4799 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882196 4799 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882202 4799 flags.go:64] FLAG: --max-pods="110"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882208 4799 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882214 4799 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882220 4799 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882226 4799 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882231 4799 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882237 4799 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882243 4799 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882258 4799 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882265 4799 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882271 4799 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882276 4799 flags.go:64] FLAG: --pod-cidr=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882282 4799 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882290 4799 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882296 4799 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882302 4799 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882307 4799 flags.go:64] FLAG: --port="10250"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882314 4799 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882321 4799 flags.go:64] FLAG: --provider-id=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882326 4799 flags.go:64] FLAG: --qos-reserved=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882332 4799 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882337 4799 flags.go:64] FLAG: --register-node="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882343 4799 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882348 4799 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882358 4799 flags.go:64] FLAG: --registry-burst="10"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882363 4799 flags.go:64] FLAG: --registry-qps="5"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882369 4799 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882375 4799 flags.go:64] FLAG: --reserved-memory=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882383 4799 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882388 4799 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882394 4799 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882399 4799 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882405 4799 flags.go:64] FLAG: --runonce="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882410 4799 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882416 4799 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882422 4799 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882427 4799 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882433 4799 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882438 4799 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882444 4799 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882450 4799 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882456 4799 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882466 4799 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882472 4799 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882478 4799 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882485 4799 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882491 4799 flags.go:64] FLAG: --system-cgroups=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882497 4799 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882506 4799 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882512 4799 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882517 4799 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882529 4799 flags.go:64] FLAG: --tls-min-version=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882535 4799 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882540 4799 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882546 4799 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882552 4799 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882558 4799 flags.go:64] FLAG: --v="2"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882565 4799 flags.go:64] FLAG: --version="false"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882572 4799 flags.go:64] FLAG: --vmodule=""
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882579 4799 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.882585 4799 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882728 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882737 4799 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882744 4799 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882749 4799 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882754 4799 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882759 4799 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882764 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882769 4799 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882773 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882778 4799 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882783 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882790 4799 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882795 4799 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882809 4799 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882815 4799 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882820 4799 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882826 4799 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882832 4799 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882838 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882843 4799 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882851 4799 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882856 4799 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882862 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882867 4799 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882872 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882877 4799 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882883 4799 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882887 4799 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882892 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882897 4799 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882902 4799 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882907 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882912 4799 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882917 4799 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882922 4799 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882929 4799 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882934 4799 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882939 4799 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882946 4799 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882951 4799 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882956 4799 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882961 4799 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882966 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882970 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882975 4799 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882982 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882987 4799 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882992 4799 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.882997 4799 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883002 4799 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883007 4799 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883012 4799 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883019 4799 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883024 4799 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883028 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883035 4799 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883041 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883047 4799 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883052 4799 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883057 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883062 4799 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883067 4799 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883073 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883078 4799 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883083 4799 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883088 4799 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883093 4799 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883098 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883103 4799 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883108 4799 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.883113 4799 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.883965 4799 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.895815 4799 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.896517 4799 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896777 4799 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896799 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896811 4799 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896822 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896832 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896844 4799 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896855 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896865 4799 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896876 4799 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896886 4799 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896895 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896905 4799 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896915 4799 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896924 4799 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896933 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896943 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896953 4799 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896963 4799 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896973 4799 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896982 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.896991 4799 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897001 4799 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897010 4799
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897020 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897031 4799 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897040 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897050 4799 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897061 4799 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897071 4799 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897080 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897089 4799 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897099 4799 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897110 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897119 4799 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897212 4799 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897223 4799 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897234 4799 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstall Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897244 4799 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897253 4799 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897263 4799 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897273 4799 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897282 4799 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897292 4799 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897302 4799 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897311 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897324 4799 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897338 4799 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897349 4799 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897360 4799 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897370 4799 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897381 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897390 4799 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897423 4799 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897433 4799 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897443 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897455 4799 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897469 4799 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897482 4799 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897494 4799 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897507 4799 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897518 4799 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897527 4799 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897537 4799 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897549 4799 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897562 4799 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897574 4799 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897585 4799 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897596 4799 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897608 4799 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897619 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.897649 4799 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.897667 4799 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898048 4799 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898085 4799 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898097 4799 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898109 4799 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898119 4799 
feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898164 4799 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898175 4799 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898186 4799 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898196 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898206 4799 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898216 4799 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898227 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898237 4799 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898247 4799 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898258 4799 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898267 4799 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898277 4799 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898286 4799 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898296 4799 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 12:31:34 crc 
kubenswrapper[4799]: W0216 12:31:34.898306 4799 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898316 4799 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898326 4799 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898335 4799 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898346 4799 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898356 4799 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898366 4799 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898375 4799 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898385 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898398 4799 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898408 4799 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898421 4799 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898434 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898445 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898455 4799 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898483 4799 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898493 4799 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898502 4799 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898513 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898523 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898532 4799 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898543 4799 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898552 4799 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898561 4799 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898572 4799 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898581 4799 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 12:31:34 crc 
kubenswrapper[4799]: W0216 12:31:34.898590 4799 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898599 4799 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898612 4799 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898623 4799 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898632 4799 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898644 4799 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898657 4799 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898668 4799 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898678 4799 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898688 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898698 4799 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898708 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898717 4799 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898727 4799 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 
12:31:34.898737 4799 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898747 4799 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898756 4799 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898766 4799 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898779 4799 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898791 4799 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898803 4799 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898812 4799 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898823 4799 feature_gate.go:330] unrecognized feature gate: Example Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898834 4799 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898844 4799 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 12:31:34 crc kubenswrapper[4799]: W0216 12:31:34.898871 4799 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.898889 4799 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.899389 4799 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.909185 4799 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.909357 4799 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.910978 4799 server.go:997] "Starting client certificate rotation" Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.911018 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.912810 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-26 20:42:47.681545751 +0000 UTC Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.913021 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.946750 4799 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 12:31:34 crc kubenswrapper[4799]: E0216 12:31:34.950390 4799 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial 
tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.952347 4799 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 12:31:34 crc kubenswrapper[4799]: I0216 12:31:34.975987 4799 log.go:25] "Validated CRI v1 runtime API" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.014772 4799 log.go:25] "Validated CRI v1 image API" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.017087 4799 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.021229 4799 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-12-26-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.021275 4799 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.043481 4799 manager.go:217] Machine: {Timestamp:2026-02-16 12:31:35.038892158 +0000 UTC m=+0.631907532 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec 
SystemUUID:25cac3c5-4ae9-4428-b3ff-f389dbe91e52 BootID:60d89bd8-e3f6-4a9b-86b3-b3b67634d734 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7e:ea:7a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7e:ea:7a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:64:ec:97 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ed:7d:a2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ae:1e:38 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7b:e8:d1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:3e:6d:6f:b1:0f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:ad:6d:87:11:30 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 
Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] 
SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.043817 4799 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.044019 4799 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.045600 4799 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.045843 4799 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.045891 4799 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.046185 4799 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.046199 4799 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.046643 4799 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.046670 4799 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.047498 4799 state_mem.go:36] "Initialized new in-memory state store" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.047626 4799 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.052315 4799 kubelet.go:418] "Attempting to sync node with API server" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.052363 4799 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.052450 4799 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.052483 4799 kubelet.go:324] "Adding apiserver pod source" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.052505 4799 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.057948 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.057908 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.058064 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.058096 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.058175 4799 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.059213 4799 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.060619 4799 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062554 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062580 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062587 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062594 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062605 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062613 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062620 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062632 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062643 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062650 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062662 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.062669 4799 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.063499 4799 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.064205 4799 server.go:1280] "Started kubelet" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.064952 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.065579 4799 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.065581 4799 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.066640 4799 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 12:31:35 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.067190 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.067223 4799 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.067739 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:19:04.176051778 +0000 UTC Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.068071 4799 server.go:460] "Adding debug handlers to kubelet server" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.068106 4799 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.068188 4799 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.068491 4799 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.071845 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.072686 4799 factory.go:55] Registering systemd factory Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.072718 4799 factory.go:221] Registration of the systemd container factory successfully Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.073327 4799 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.074945 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.075035 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.075264 4799 factory.go:153] Registering CRI-O factory Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.075293 4799 factory.go:221] Registration of the crio container factory successfully Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.075405 4799 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.075776 4799 factory.go:103] Registering Raw factory Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.075998 4799 manager.go:1196] Started watching for new ooms in manager Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.082887 4799 manager.go:319] Starting recovery of all containers Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.082178 4799 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894ba078d73e283 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:31:35.064167043 +0000 UTC m=+0.657182377,LastTimestamp:2026-02-16 12:31:35.064167043 +0000 UTC m=+0.657182377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095216 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095327 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 12:31:35 crc 
kubenswrapper[4799]: I0216 12:31:35.095357 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095384 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095410 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095435 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095460 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095496 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095534 4799 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095561 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095584 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095610 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095635 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095688 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095714 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095739 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095770 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095792 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095813 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095832 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095857 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095882 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095911 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095939 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095961 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.095984 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096009 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096030 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096049 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096067 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096084 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096101 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096155 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" 
seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096174 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096222 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096243 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096261 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096283 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096306 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096333 4799 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096358 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096382 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096407 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096434 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096459 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096484 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096506 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096528 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096550 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096570 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096589 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096610 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096636 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096659 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096679 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096700 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096726 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096747 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" 
seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096771 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096788 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096805 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096824 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096843 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096866 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 
12:31:35.096898 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096919 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096938 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096956 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096973 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.096990 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097008 4799 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097026 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097046 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097064 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097083 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097106 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097250 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097274 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097298 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.097319 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098255 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098293 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098339 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098365 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098386 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098429 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098451 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098480 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098504 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098526 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098562 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098592 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098620 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098705 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098774 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098816 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098835 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098852 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098879 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098895 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098912 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098936 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098950 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.098977 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099007 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099037 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099067 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099087 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099150 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099174 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099207 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099235 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099252 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" 
seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099277 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099304 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099320 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099343 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099364 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099385 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 
12:31:35.099405 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099456 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099474 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099493 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099509 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099530 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099549 4799 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099566 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099588 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099602 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099628 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099648 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099666 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099692 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099712 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099737 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099757 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099776 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099803 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099821 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099840 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099862 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099882 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099902 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099917 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099939 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099960 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.099978 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105038 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105105 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105132 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" 
seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105149 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105162 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105178 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105196 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105209 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105224 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105239 4799 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105278 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.105309 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.109761 4799 manager.go:324] Recovery completed Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.109855 4799 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.110880 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.110909 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.110929 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.110952 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.110974 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.110995 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111013 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111029 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111047 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111065 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111082 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111099 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111116 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111154 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111172 4799 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111190 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111208 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111227 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111243 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111260 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111278 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111294 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111334 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111352 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111370 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111388 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111406 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111423 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111438 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111454 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111473 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111491 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111508 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" 
seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111524 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111540 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111557 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111577 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111597 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111616 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 12:31:35 crc 
kubenswrapper[4799]: I0216 12:31:35.111634 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111650 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111666 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111685 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111701 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111717 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111732 4799 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111748 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111763 4799 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111779 4799 reconstruct.go:97] "Volume reconstruction finished" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.111790 4799 reconciler.go:26] "Reconciler: start to sync state" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.127486 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.129486 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.129623 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.129653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.131052 4799 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.131081 4799 cpu_manager.go:226] "Reconciling" 
reconcilePeriod="10s" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.131107 4799 state_mem.go:36] "Initialized new in-memory state store" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.145119 4799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.147804 4799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.147908 4799 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.147993 4799 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.148115 4799 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.148866 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.149004 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.155189 4799 policy_none.go:49] "None policy: Start" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.156442 4799 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.156487 4799 state_mem.go:35] "Initializing new 
in-memory state store" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.169387 4799 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.248384 4799 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.248513 4799 manager.go:334] "Starting Device Plugin manager" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.249737 4799 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.249785 4799 server.go:79] "Starting device plugin registration server" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.250597 4799 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.250625 4799 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.250794 4799 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.250907 4799 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.250917 4799 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.270047 4799 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.272945 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.356454 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.359349 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.359411 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.359434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.359474 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.360188 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.449256 4799 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.449400 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.450684 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.450724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.450737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.450855 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.451181 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.451256 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.451632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.451659 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.451672 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.451801 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452218 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452268 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452624 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452673 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452644 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452800 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.452956 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.453090 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.453172 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.453590 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.453628 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.453643 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.453965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.453998 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.454009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.454118 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.454255 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.454306 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.454477 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.454500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.454531 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.455377 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.455386 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.455398 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.455433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.455405 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.455524 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.455765 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.455803 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.456718 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.456748 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.456762 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517365 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517410 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517433 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 
12:31:35.517455 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517473 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517491 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517537 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517553 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517570 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517586 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517602 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517618 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517688 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517705 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.517747 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.561374 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.562836 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.562915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.562941 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.562989 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.563747 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619670 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619759 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619787 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619810 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619832 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619856 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619876 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619896 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619914 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619934 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619957 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619975 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" 
(UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619980 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620049 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.619997 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620115 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620198 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620200 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620261 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620270 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620287 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620304 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620325 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620342 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620363 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620375 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620394 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620407 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620236 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.620503 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.674633 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.784588 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.795475 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.803379 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.824532 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.829269 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.868495 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2f25e19211647c8a91a095c05d59d43ed023062f1800709f3d7d8fa64e0d7683 WatchSource:0}: Error finding container 2f25e19211647c8a91a095c05d59d43ed023062f1800709f3d7d8fa64e0d7683: Status 404 returned error can't find the container with id 2f25e19211647c8a91a095c05d59d43ed023062f1800709f3d7d8fa64e0d7683 Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.872830 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7e6cbf0fa4076a62a8d27d710c5af18c84af9bc4c278c56c81e746ac24b291e0 WatchSource:0}: Error finding container 7e6cbf0fa4076a62a8d27d710c5af18c84af9bc4c278c56c81e746ac24b291e0: Status 404 returned error can't find the container with id 7e6cbf0fa4076a62a8d27d710c5af18c84af9bc4c278c56c81e746ac24b291e0 Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.875490 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-93392a5a78474c3533b8794c7a6e6d112c03da4277095f00d9b0997075118d43 WatchSource:0}: Error finding container 93392a5a78474c3533b8794c7a6e6d112c03da4277095f00d9b0997075118d43: Status 404 returned error can't find the container with id 93392a5a78474c3533b8794c7a6e6d112c03da4277095f00d9b0997075118d43 Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.952643 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: 
connection refused Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.952825 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.964258 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.965873 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.965969 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.965992 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:35 crc kubenswrapper[4799]: I0216 12:31:35.966041 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.966798 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Feb 16 12:31:35 crc kubenswrapper[4799]: W0216 12:31:35.984112 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:35 crc kubenswrapper[4799]: E0216 12:31:35.984222 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:36 crc kubenswrapper[4799]: W0216 12:31:36.016233 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:36 crc kubenswrapper[4799]: E0216 12:31:36.016314 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.066669 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.068588 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:09:40.673361978 +0000 UTC Feb 16 12:31:36 crc kubenswrapper[4799]: W0216 12:31:36.100109 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:36 crc kubenswrapper[4799]: E0216 12:31:36.100289 4799 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.152600 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"295d5f72fff9b0767ea36f0b31de222f7ab992f64de1fc507f517e6c02da85de"} Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.155531 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"93392a5a78474c3533b8794c7a6e6d112c03da4277095f00d9b0997075118d43"} Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.156516 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7e6cbf0fa4076a62a8d27d710c5af18c84af9bc4c278c56c81e746ac24b291e0"} Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.157352 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2f25e19211647c8a91a095c05d59d43ed023062f1800709f3d7d8fa64e0d7683"} Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.158419 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e8ff682733c1b56e798ff8982e3ba55168b8fa80cb199ea37d4c5bdfe6eb8b0"} Feb 16 12:31:36 crc kubenswrapper[4799]: E0216 12:31:36.475969 4799 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.767760 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.769431 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.769482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.769498 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:36 crc kubenswrapper[4799]: I0216 12:31:36.769537 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:31:36 crc kubenswrapper[4799]: E0216 12:31:36.770163 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.030604 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 12:31:37 crc kubenswrapper[4799]: E0216 12:31:37.032917 4799 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.066571 
4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.069706 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:43:09.108155144 +0000 UTC Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.165513 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5"} Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.165608 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46"} Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.167643 4799 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f1e1de0c97cee7367af5fb8fdb7b1f68a630fec97fdc1883fafb7d9c49969871" exitCode=0 Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.167826 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f1e1de0c97cee7367af5fb8fdb7b1f68a630fec97fdc1883fafb7d9c49969871"} Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.167899 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.169952 4799 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.170002 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.170024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.170960 4799 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92" exitCode=0 Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.171099 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.171275 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92"} Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.172324 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.172474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.172506 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.174823 4799 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479" exitCode=0 Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.174972 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479"} Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.175101 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.176786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.176826 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.176842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.178355 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a" exitCode=0 Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.178402 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a"} Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.178540 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.179568 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.179616 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:37 
crc kubenswrapper[4799]: I0216 12:31:37.179634 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.185209 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.186350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.186394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:37 crc kubenswrapper[4799]: I0216 12:31:37.186406 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:37 crc kubenswrapper[4799]: W0216 12:31:37.911953 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:37 crc kubenswrapper[4799]: E0216 12:31:37.912080 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:38 crc kubenswrapper[4799]: W0216 12:31:38.064482 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:38 crc kubenswrapper[4799]: E0216 12:31:38.064581 4799 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.065857 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.070061 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:21:43.779846214 +0000 UTC Feb 16 12:31:38 crc kubenswrapper[4799]: E0216 12:31:38.077184 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.188251 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"80217237c504698bf142a9eb0ffd021fb6fef992af71b475092d23cc32676cb6"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.188394 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.189419 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.189462 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.189474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.190773 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.190805 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.190818 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.190837 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.191776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.191808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.191821 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.193682 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.193707 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.193719 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.193728 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.196946 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.196974 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.197015 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:38 crc 
kubenswrapper[4799]: I0216 12:31:38.197895 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.197923 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.197934 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.200234 4799 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="72e4ec8673ce855443d574f0ab96a954f4e9a28f0b4215c47556fba8b203ced4" exitCode=0 Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.200279 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"72e4ec8673ce855443d574f0ab96a954f4e9a28f0b4215c47556fba8b203ced4"} Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.200341 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.201002 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.201069 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.201081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.370559 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.371889 4799 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.371937 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.371949 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.371984 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:31:38 crc kubenswrapper[4799]: E0216 12:31:38.372499 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Feb 16 12:31:38 crc kubenswrapper[4799]: I0216 12:31:38.762698 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.070606 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:01:58.841685432 +0000 UTC Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.207960 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae"} Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.208108 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.216222 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.216297 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.216323 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.217386 4799 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f27931794023c2cd1a1801c5b8ab025be7d5dc5db1717f1c30d4ffe0a1e7b6eb" exitCode=0 Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.217472 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f27931794023c2cd1a1801c5b8ab025be7d5dc5db1717f1c30d4ffe0a1e7b6eb"} Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.217545 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.217701 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.217810 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.217719 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.217711 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.218965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.219012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.219035 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.219351 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.219403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.219427 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.219546 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.219640 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.219672 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.220305 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.220343 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.220363 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:39 crc kubenswrapper[4799]: I0216 12:31:39.733787 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 
12:31:40.011271 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.071037 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:08:52.037369625 +0000 UTC Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.225338 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.225388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a2cc9b541179edb5a179197fcbe2eb19b4660661e2bcdd45c545532da9894695"} Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.225436 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.225452 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c4bb0a49d9cb3e9e3ca0e649faf132b7b26b9731e4d9ab9a19dc598bb92840da"} Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.225466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"60132d94b7f88363aa7a2f612c6dc28010c0f9d835a58475522f67e3cce3fcf3"} Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.225480 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94089ffcbb06561da01941df4292bf94dd3453cc94822388d05ad90792b59b54"} Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.225350 4799 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.225488 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.226047 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.226087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.226099 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.226551 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.226574 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.226582 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.227282 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.227328 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:40 crc kubenswrapper[4799]: I0216 12:31:40.227339 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.072087 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2025-11-27 10:36:49.226337072 +0000 UTC Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.239037 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5463661d00aa1d634c5e4eee167eda16a90c16c3025bf5a7da4a10a2e52990ed"} Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.239188 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.239370 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.240826 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.240857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.240867 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.241172 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.241222 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.241241 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.359307 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.534085 4799 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.534335 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.535801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.535837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.535851 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.572811 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.575040 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.575112 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.575155 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.575197 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.645285 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 12:31:41 crc kubenswrapper[4799]: I0216 12:31:41.657081 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.072434 4799 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:30:28.080730843 +0000 UTC Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.241790 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.243021 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.243085 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.243102 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.931506 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.931821 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.933809 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.933874 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.933889 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:42 crc kubenswrapper[4799]: I0216 12:31:42.939961 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 
12:31:43.072893 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:00:10.860218291 +0000 UTC Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 12:31:43.244845 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 12:31:43.244978 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 12:31:43.247017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 12:31:43.247081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 12:31:43.247090 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 12:31:43.247101 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 12:31:43.247190 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:43 crc kubenswrapper[4799]: I0216 12:31:43.247249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:44 crc kubenswrapper[4799]: I0216 12:31:44.073193 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:06:22.659681875 +0000 UTC Feb 16 12:31:44 crc kubenswrapper[4799]: I0216 12:31:44.272329 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:44 crc kubenswrapper[4799]: I0216 12:31:44.272604 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:44 crc kubenswrapper[4799]: I0216 12:31:44.274666 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:44 crc kubenswrapper[4799]: I0216 12:31:44.274733 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:44 crc kubenswrapper[4799]: I0216 12:31:44.274755 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:45 crc kubenswrapper[4799]: I0216 12:31:45.073524 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:12:31.478238096 +0000 UTC Feb 16 12:31:45 crc kubenswrapper[4799]: E0216 12:31:45.270535 4799 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 12:31:46 crc kubenswrapper[4799]: I0216 12:31:46.073861 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:01:57.66863879 +0000 UTC Feb 16 12:31:47 crc kubenswrapper[4799]: I0216 12:31:47.074178 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:21:43.757892562 +0000 UTC Feb 16 12:31:47 crc kubenswrapper[4799]: I0216 12:31:47.272941 4799 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 12:31:47 crc kubenswrapper[4799]: I0216 12:31:47.273008 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 12:31:48 crc kubenswrapper[4799]: I0216 12:31:48.074720 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:57:17.59556338 +0000 UTC Feb 16 12:31:48 crc kubenswrapper[4799]: I0216 12:31:48.768599 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:48 crc kubenswrapper[4799]: I0216 12:31:48.768729 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:48 crc kubenswrapper[4799]: I0216 12:31:48.770251 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:48 crc kubenswrapper[4799]: I0216 12:31:48.770282 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:48 crc kubenswrapper[4799]: I0216 12:31:48.770291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:48 crc kubenswrapper[4799]: W0216 12:31:48.989938 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 12:31:48 crc 
kubenswrapper[4799]: I0216 12:31:48.990162 4799 trace.go:236] Trace[1751629296]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:31:38.987) (total time: 10002ms): Feb 16 12:31:48 crc kubenswrapper[4799]: Trace[1751629296]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:31:48.989) Feb 16 12:31:48 crc kubenswrapper[4799]: Trace[1751629296]: [10.002109716s] [10.002109716s] END Feb 16 12:31:48 crc kubenswrapper[4799]: E0216 12:31:48.990219 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 12:31:49 crc kubenswrapper[4799]: W0216 12:31:49.042671 4799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 12:31:49 crc kubenswrapper[4799]: I0216 12:31:49.042858 4799 trace.go:236] Trace[28620032]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:31:39.040) (total time: 10002ms): Feb 16 12:31:49 crc kubenswrapper[4799]: Trace[28620032]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (12:31:49.042) Feb 16 12:31:49 crc kubenswrapper[4799]: Trace[28620032]: [10.002380768s] [10.002380768s] END Feb 16 12:31:49 crc kubenswrapper[4799]: E0216 12:31:49.042914 4799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to 
list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 12:31:49 crc kubenswrapper[4799]: I0216 12:31:49.067546 4799 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 12:31:49 crc kubenswrapper[4799]: I0216 12:31:49.074871 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:27:01.632004016 +0000 UTC Feb 16 12:31:49 crc kubenswrapper[4799]: I0216 12:31:49.734797 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 12:31:49 crc kubenswrapper[4799]: I0216 12:31:49.734883 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 12:31:49 crc kubenswrapper[4799]: I0216 12:31:49.908692 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 12:31:49 crc kubenswrapper[4799]: I0216 12:31:49.908760 
4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 12:31:50 crc kubenswrapper[4799]: I0216 12:31:50.075262 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:46:01.463242079 +0000 UTC Feb 16 12:31:51 crc kubenswrapper[4799]: I0216 12:31:51.075873 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:31:21.875641249 +0000 UTC Feb 16 12:31:51 crc kubenswrapper[4799]: I0216 12:31:51.673187 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 12:31:51 crc kubenswrapper[4799]: I0216 12:31:51.673385 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:51 crc kubenswrapper[4799]: I0216 12:31:51.674427 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:51 crc kubenswrapper[4799]: I0216 12:31:51.674473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:51 crc kubenswrapper[4799]: I0216 12:31:51.674485 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:51 crc kubenswrapper[4799]: I0216 12:31:51.685837 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 12:31:52 crc kubenswrapper[4799]: I0216 12:31:52.077012 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:17:50.572272369 +0000 UTC 
Feb 16 12:31:52 crc kubenswrapper[4799]: I0216 12:31:52.270284 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:52 crc kubenswrapper[4799]: I0216 12:31:52.271387 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:52 crc kubenswrapper[4799]: I0216 12:31:52.271442 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:52 crc kubenswrapper[4799]: I0216 12:31:52.271466 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:53 crc kubenswrapper[4799]: I0216 12:31:53.078194 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:25:14.000320767 +0000 UTC Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.058023 4799 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.079799 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:30:33.963597022 +0000 UTC Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.741863 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.742226 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.744573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.744653 4799 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.744681 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.747773 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.843784 4799 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 12:31:54 crc kubenswrapper[4799]: E0216 12:31:54.900038 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.904487 4799 trace.go:236] Trace[1601258145]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:31:41.906) (total time: 12998ms): Feb 16 12:31:54 crc kubenswrapper[4799]: Trace[1601258145]: ---"Objects listed" error: 12998ms (12:31:54.904) Feb 16 12:31:54 crc kubenswrapper[4799]: Trace[1601258145]: [12.998246853s] [12.998246853s] END Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.904520 4799 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.904859 4799 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 12:31:54 crc kubenswrapper[4799]: E0216 12:31:54.906229 4799 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.910873 4799 trace.go:236] Trace[1389850375]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:31:42.887) (total time: 12023ms): Feb 16 12:31:54 crc kubenswrapper[4799]: Trace[1389850375]: ---"Objects listed" error: 12023ms (12:31:54.910) Feb 16 12:31:54 crc kubenswrapper[4799]: Trace[1389850375]: [12.023418618s] [12.023418618s] END Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.910915 4799 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.921113 4799 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.947515 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60132->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.947615 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60132->192.168.126.11:17697: read: connection reset by peer" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.947641 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60116->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.947852 4799 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60116->192.168.126.11:17697: read: connection reset by peer" Feb 16 12:31:54 crc kubenswrapper[4799]: I0216 12:31:54.995675 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.001168 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.066901 4799 apiserver.go:52] "Watching apiserver" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.069938 4799 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.070452 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.070831 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.070833 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.070941 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.071260 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.071339 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.071736 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.071919 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.072061 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.072209 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.074185 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.074542 4799 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.074883 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.075059 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.075072 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.075073 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.075545 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.076392 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.076534 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.079929 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:07:12.408898699 +0000 UTC Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.082544 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.100229 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105548 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105588 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" 
(UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105627 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105650 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105668 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105686 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105703 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105912 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105940 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.105966 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106000 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106020 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106040 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106063 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106093 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106111 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106153 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106175 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 
12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106193 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106210 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106228 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106248 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106266 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106286 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106315 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106340 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106379 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106397 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106418 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106441 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106462 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106484 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106508 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106528 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106550 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106568 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106603 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106622 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106640 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106658 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106681 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106700 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106689 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106718 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106826 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106865 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.106906 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107056 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107100 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107211 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107298 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107342 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107382 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107413 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 
12:31:55.107444 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107472 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107499 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107496 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107530 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107530 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107558 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107593 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107622 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107608 4799 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107595 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107650 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107685 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107753 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107744 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107822 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107846 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107884 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107915 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107935 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107980 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.107961 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108037 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108017 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108141 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108201 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108257 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108342 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108354 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108406 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108481 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108545 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108621 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108716 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108674 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108803 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108832 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108839 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108870 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108925 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108959 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.108990 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109017 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109042 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109164 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109198 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109226 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109317 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109351 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc 
kubenswrapper[4799]: I0216 12:31:55.109451 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109484 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109514 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109543 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109578 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109605 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109661 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109688 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109719 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109751 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109779 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109809 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109838 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109867 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109895 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109927 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.110893 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.110939 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.110965 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.110989 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111016 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111047 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111073 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111097 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111151 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111183 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111214 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111244 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111270 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111296 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111321 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111345 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111374 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111402 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111427 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111450 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111474 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111498 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111749 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111785 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111814 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111844 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111873 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111906 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " 
Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111936 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111965 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111994 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.112206 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.112881 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.112923 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.112966 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.112995 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113024 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113050 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113078 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 
12:31:55.113178 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113212 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113240 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113265 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113294 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113386 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113499 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113535 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113559 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113585 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113617 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113645 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113708 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113734 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113764 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114169 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114215 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 
16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114255 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114289 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114320 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114354 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114386 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114418 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114448 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114478 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114509 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114541 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114568 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114600 
4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114636 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115469 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115603 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115682 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115728 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115765 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115812 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115846 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115877 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115907 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115937 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115968 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115997 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116027 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116067 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116099 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116149 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116178 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116203 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116229 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116259 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116287 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116323 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116353 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116389 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116416 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116446 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116474 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116504 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116573 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116623 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.117364 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.117427 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.117462 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109317 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.109739 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.110453 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.110532 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.110552 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111751 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.111856 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.112098 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.112469 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.112816 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113543 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113728 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113296 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.113933 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114272 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114570 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.114835 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115031 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115196 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115224 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115424 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115468 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115603 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115791 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115841 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.115972 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116032 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116762 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.116809 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.117175 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.117282 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.117696 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.117826 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.118017 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.118070 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:31:55.618047398 +0000 UTC m=+21.211062732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120640 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120696 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120729 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120765 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120802 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120846 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120878 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120906 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.120937 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121031 4799 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121051 4799 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121069 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121087 4799 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121102 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121119 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121158 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121174 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121188 4799 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121204 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121221 4799 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121237 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121253 4799 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121267 4799 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121282 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121300 4799 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121314 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121330 4799 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121347 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.121366 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.123612 4799 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.123941 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.123964 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.124445 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.124458 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.118215 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.118461 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.118636 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.118873 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.119417 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.119611 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.124470 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.124683 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.125049 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.125115 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.125213 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.125336 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.125466 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.125795 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.125876 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.125882 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126026 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126276 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126303 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126390 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126414 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126199 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126682 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126744 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.126806 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.127099 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.127171 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.127281 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.127578 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.127656 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.127918 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.128170 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.128527 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.129022 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.129670 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.129894 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.129990 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.130021 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.130014 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.118172 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.131452 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.131539 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132083 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132075 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132380 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132517 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132612 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132771 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132788 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132851 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.132940 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.133554 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.133726 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.133859 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.133932 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.133949 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:55.633917699 +0000 UTC m=+21.226933043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.134172 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.134363 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.135541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.135695 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.135734 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.135601 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.135933 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.136143 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.136209 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:55.636193683 +0000 UTC m=+21.229209027 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.136652 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.137667 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.138402 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.138628 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.131601 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.147578 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.147913 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.148038 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.147763 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.148335 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.149196 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.149598 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.149645 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.149671 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.150391 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.150789 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.151192 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.152073 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.152381 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.152429 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.152449 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.152624 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.153471 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.153500 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.153515 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.153599 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:55.653575277 +0000 UTC m=+21.246590801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.154498 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.154370 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.154803 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.154888 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.155105 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.155145 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.155158 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.155208 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:55.655191463 +0000 UTC m=+21.248206797 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.155301 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.155460 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.155873 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.156156 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.156173 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.156201 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.156223 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.156080 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.156644 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.156768 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.156909 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.157057 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.159964 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.160249 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.160879 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.161059 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.161545 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.161677 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.162671 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.162797 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.164301 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.165040 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.167173 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.167293 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.167325 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.167425 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.167602 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.167621 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.167825 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.168073 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.168198 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.168875 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.168937 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.168901 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.168919 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.169312 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.169356 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.169953 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.169964 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.170170 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.170203 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.170382 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.170482 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.170519 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.170724 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.170727 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.171285 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.171336 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.171335 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.171404 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.171576 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.172465 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.172948 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.173432 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.173616 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.173923 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.174892 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.177084 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.177533 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.177643 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.178484 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.178581 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.179089 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.179175 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.179471 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.179850 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.180434 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.181797 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.182976 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.183507 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.184730 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.186865 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.190021 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.190420 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.190990 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.191776 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.193495 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.194037 4799 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.194235 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.197318 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.198657 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.199846 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.200453 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.202820 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.204588 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.205305 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.207104 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.207267 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.207955 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.209535 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.212197 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.212657 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.213226 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.214802 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.215865 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.217076 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.217819 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.218943 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.219734 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.219979 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.220686 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.221275 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222226 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222353 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc 
kubenswrapper[4799]: I0216 12:31:55.222455 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222507 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222522 4799 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222534 4799 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222544 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222554 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222564 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 
12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222574 4799 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222584 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222593 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222602 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222611 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222696 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222818 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.222841 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223258 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223297 4799 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223452 4799 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223487 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223505 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223521 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223540 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223545 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223652 4799 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223669 4799 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223683 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223696 4799 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223708 4799 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 
12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223721 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223733 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223744 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223755 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223768 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223782 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223826 4799 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223840 4799 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.223997 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224039 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224052 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224064 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224070 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224075 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224118 4799 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224141 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224153 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224165 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224176 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224189 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224200 4799 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224210 4799 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224220 4799 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224230 4799 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224241 4799 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224252 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224262 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224272 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224282 4799 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224293 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224304 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224314 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224325 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224334 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224345 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224355 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224366 4799 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224375 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224387 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224399 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224409 4799 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224421 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224431 4799 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224442 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224451 4799 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224461 4799 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224471 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224481 4799 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224490 4799 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224499 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 
12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224511 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224520 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224530 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224539 4799 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224549 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224558 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224568 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224578 4799 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224589 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224598 4799 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224607 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224616 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224626 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224637 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224647 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" 
DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224676 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224685 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224695 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224705 4799 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224713 4799 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224723 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224734 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224742 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224752 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224761 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224771 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224780 4799 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224788 4799 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224798 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224807 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 
16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224820 4799 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224829 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224838 4799 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224847 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224857 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224867 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224877 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224886 4799 reconciler_common.go:293] "Volume detached 
for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224895 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224906 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224915 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224924 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224933 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224942 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224951 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224961 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224972 4799 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224982 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.224993 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225003 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225013 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225024 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" 
(UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225034 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225044 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225053 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225062 4799 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225072 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225083 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225111 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225123 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225143 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225153 4799 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225165 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225176 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225237 4799 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225249 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath 
\"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225260 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225269 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225279 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225288 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225298 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225307 4799 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225317 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225326 4799 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225336 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225345 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225354 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225363 4799 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225372 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225382 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225394 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225403 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225413 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225422 4799 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225433 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225442 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225451 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225460 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" 
DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225469 4799 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225478 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225487 4799 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225496 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225505 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225515 4799 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225524 4799 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225533 4799 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225544 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225553 4799 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225562 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225572 4799 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225581 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.225591 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.231232 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.241644 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.252419 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.267659 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.279935 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.280648 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.281884 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae" exitCode=255 Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.281955 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae"} Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.289241 4799 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.291617 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.295014 4799 scope.go:117] "RemoveContainer" containerID="6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.295051 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.305439 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.316953 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.329908 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.344097 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcae
ef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.355417 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.365772 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.377905 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.392555 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.403428 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:31:55 crc kubenswrapper[4799]: W0216 12:31:55.407811 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3ca7bd0ca8b5a0fba7b5a5964b09a8c338467b88f531fc6bbcbfd09a32c777c4 WatchSource:0}: Error finding container 3ca7bd0ca8b5a0fba7b5a5964b09a8c338467b88f531fc6bbcbfd09a32c777c4: Status 404 returned error can't find the container with id 3ca7bd0ca8b5a0fba7b5a5964b09a8c338467b88f531fc6bbcbfd09a32c777c4 Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.417498 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.632701 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.633115 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:31:56.633084059 +0000 UTC m=+22.226099403 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.733682 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.733724 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.733754 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:55 crc kubenswrapper[4799]: I0216 12:31:55.733775 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734088 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734104 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734114 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734175 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:56.734162408 +0000 UTC m=+22.327177742 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734507 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734610 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:56.73459298 +0000 UTC m=+22.327608314 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734529 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734668 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:56.734658012 +0000 UTC m=+22.327673346 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734533 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734699 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734712 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:55 crc kubenswrapper[4799]: E0216 12:31:55.734738 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:56.734731534 +0000 UTC m=+22.327746868 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.080684 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:33:29.376638987 +0000 UTC Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.286647 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.288473 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092"} Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.288873 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.290013 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f41fb1467321e67ee56afeb5824f2188cbf381067fa8741ab09355e67e7628a8"} Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.292259 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d"} Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.292347 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284"} Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.292368 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b2c6a6c1f88ed2366551c8a5e84e6c5fe279e85bb3de2ab6f993c23803e87075"} Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.293753 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d"} Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.293814 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3ca7bd0ca8b5a0fba7b5a5964b09a8c338467b88f531fc6bbcbfd09a32c777c4"} Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.321616 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.337828 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.369045 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.389983 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.428789 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.435578 4799 csr.go:261] certificate signing request csr-67nvq is approved, waiting to be issued Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.455321 4799 csr.go:257] certificate signing request csr-67nvq is issued Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.467928 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.479963 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.494061 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.508525 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.524698 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.540813 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.566430 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.580029 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.593096 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.605972 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.618234 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.640542 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.640762 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:31:58.640732295 +0000 UTC m=+24.233747629 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.741998 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.742062 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.742096 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:56 crc kubenswrapper[4799]: I0216 12:31:56.742147 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742251 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742335 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742504 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:58.742454103 +0000 UTC m=+24.335469447 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742352 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742576 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:58.742553296 +0000 UTC m=+24.335568630 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742588 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742609 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742370 4799 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742658 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742681 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742703 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:58.742678389 +0000 UTC m=+24.335693713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:56 crc kubenswrapper[4799]: E0216 12:31:56.742735 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:31:58.742720161 +0000 UTC m=+24.335735685 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.082177 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:20:41.081256559 +0000 UTC Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.146399 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zl9jj"] Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.146855 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zl9jj" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.148396 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.148396 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:31:57 crc kubenswrapper[4799]: E0216 12:31:57.148553 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:31:57 crc kubenswrapper[4799]: E0216 12:31:57.148681 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.148398 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:57 crc kubenswrapper[4799]: E0216 12:31:57.148818 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.150241 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.153318 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.153642 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.154179 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.155505 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.159986 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.160807 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.162168 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.163047 4799 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.164324 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.165083 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.166111 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.166877 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.167796 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.168444 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.169426 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7j77r"] Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.169844 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.171454 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.171691 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.172262 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.172434 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.172553 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.175061 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.188054 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.200030 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.215853 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.229855 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247226 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-cni-multus\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247281 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-hostroot\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247314 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4w6z\" (UniqueName: \"kubernetes.io/projected/ff442c08-09db-4354-b9be-b43956019ba7-kube-api-access-h4w6z\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247480 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff442c08-09db-4354-b9be-b43956019ba7-cni-binary-copy\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247505 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-netns\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247527 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-kubelet\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247555 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ff442c08-09db-4354-b9be-b43956019ba7-multus-daemon-config\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247583 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrsp\" (UniqueName: \"kubernetes.io/projected/127d928e-7ce1-44a2-976e-de7017f78747-kube-api-access-6rrsp\") pod \"node-resolver-zl9jj\" (UID: \"127d928e-7ce1-44a2-976e-de7017f78747\") " pod="openshift-dns/node-resolver-zl9jj" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247610 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-k8s-cni-cncf-io\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247662 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-os-release\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247692 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-socket-dir-parent\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247724 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/127d928e-7ce1-44a2-976e-de7017f78747-hosts-file\") pod \"node-resolver-zl9jj\" (UID: \"127d928e-7ce1-44a2-976e-de7017f78747\") " pod="openshift-dns/node-resolver-zl9jj" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247754 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-system-cni-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247781 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-cnibin\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247789 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.247811 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-cni-bin\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.248049 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-multus-certs\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.248108 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-etc-kubernetes\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.248177 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-cni-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.248196 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-conf-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.281052 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.303883 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.320684 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.333829 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.347808 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349094 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrsp\" (UniqueName: \"kubernetes.io/projected/127d928e-7ce1-44a2-976e-de7017f78747-kube-api-access-6rrsp\") pod \"node-resolver-zl9jj\" (UID: \"127d928e-7ce1-44a2-976e-de7017f78747\") " pod="openshift-dns/node-resolver-zl9jj" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349141 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-k8s-cni-cncf-io\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349181 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/127d928e-7ce1-44a2-976e-de7017f78747-hosts-file\") pod \"node-resolver-zl9jj\" (UID: 
\"127d928e-7ce1-44a2-976e-de7017f78747\") " pod="openshift-dns/node-resolver-zl9jj" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349201 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-system-cni-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349218 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-cnibin\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349237 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-os-release\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349253 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-socket-dir-parent\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349272 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-cni-bin\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349290 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-cni-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349310 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-conf-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349326 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-multus-certs\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349342 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-etc-kubernetes\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349362 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-cni-multus\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349379 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-hostroot\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349372 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-k8s-cni-cncf-io\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349418 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4w6z\" (UniqueName: \"kubernetes.io/projected/ff442c08-09db-4354-b9be-b43956019ba7-kube-api-access-h4w6z\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349462 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-system-cni-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349498 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-conf-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349551 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff442c08-09db-4354-b9be-b43956019ba7-cni-binary-copy\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " 
pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-cni-bin\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349480 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-cnibin\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349589 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-cni-multus\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349589 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-os-release\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-netns\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349599 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-hostroot\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349604 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-socket-dir-parent\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349598 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-multus-certs\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349665 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/127d928e-7ce1-44a2-976e-de7017f78747-hosts-file\") pod \"node-resolver-zl9jj\" (UID: \"127d928e-7ce1-44a2-976e-de7017f78747\") " pod="openshift-dns/node-resolver-zl9jj" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349729 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-multus-cni-dir\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349749 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-etc-kubernetes\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" 
Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349760 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-run-netns\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349813 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-kubelet\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349799 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ff442c08-09db-4354-b9be-b43956019ba7-host-var-lib-kubelet\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.349884 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ff442c08-09db-4354-b9be-b43956019ba7-multus-daemon-config\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.350407 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff442c08-09db-4354-b9be-b43956019ba7-cni-binary-copy\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.350709 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/ff442c08-09db-4354-b9be-b43956019ba7-multus-daemon-config\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.366095 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.370499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrsp\" (UniqueName: \"kubernetes.io/projected/127d928e-7ce1-44a2-976e-de7017f78747-kube-api-access-6rrsp\") pod \"node-resolver-zl9jj\" (UID: \"127d928e-7ce1-44a2-976e-de7017f78747\") " pod="openshift-dns/node-resolver-zl9jj" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.372193 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4w6z\" (UniqueName: \"kubernetes.io/projected/ff442c08-09db-4354-b9be-b43956019ba7-kube-api-access-h4w6z\") pod \"multus-7j77r\" (UID: \"ff442c08-09db-4354-b9be-b43956019ba7\") " pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.382647 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.395482 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.411053 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.423475 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.438483 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.451885 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.456702 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 12:26:56 +0000 UTC, rotation deadline is 2026-11-25 05:26:31.073439286 +0000 UTC Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.456743 4799 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6760h54m33.616698785s for next certificate rotation Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.465308 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.466399 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zl9jj" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.483030 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7j77r" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.555328 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-4p4qf"] Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.558152 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzcq6"] Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.558825 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-6dl99"] Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.559111 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.559273 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.559615 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.565487 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.565878 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.566189 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.566390 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.566529 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.566600 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.566673 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.566678 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.566767 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.567355 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.567501 4799 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.567615 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.567729 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.567738 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.581766 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.594958 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.611343 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.627717 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.648181 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.653638 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-var-lib-openvswitch\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.653702 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-systemd-units\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.653722 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-log-socket\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.653793 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-netd\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.653855 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-systemd\") pod 
\"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.653876 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-node-log\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.653894 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-bin\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.653936 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-config\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654012 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-kubelet\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654035 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8ae13b0a-1f69-476d-a552-4467fcedac14-ovn-node-metrics-cert\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654098 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nb7s\" (UniqueName: \"kubernetes.io/projected/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-kube-api-access-8nb7s\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654141 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654169 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrjz\" (UniqueName: \"kubernetes.io/projected/e36db86c-3626-446f-8410-7e1f42ed16e1-kube-api-access-qtrjz\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654188 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cnibin\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654220 
4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e36db86c-3626-446f-8410-7e1f42ed16e1-rootfs\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654241 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cni-binary-copy\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654263 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-ovn\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654289 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-script-lib\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654316 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e36db86c-3626-446f-8410-7e1f42ed16e1-mcd-auth-proxy-config\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654347 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-env-overrides\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654397 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-os-release\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654419 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654435 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-slash\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654452 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-etc-openvswitch\") pod 
\"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654475 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-system-cni-dir\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654493 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654508 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-netns\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654524 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-openvswitch\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654545 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654568 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcvk2\" (UniqueName: \"kubernetes.io/projected/8ae13b0a-1f69-476d-a552-4467fcedac14-kube-api-access-mcvk2\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.654590 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e36db86c-3626-446f-8410-7e1f42ed16e1-proxy-tls\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.668083 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.683515 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.708529 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.731730 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.744022 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755607 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-script-lib\") pod \"ovnkube-node-mzcq6\" (UID: 
\"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755659 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e36db86c-3626-446f-8410-7e1f42ed16e1-rootfs\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755685 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cni-binary-copy\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755709 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-ovn\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755733 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e36db86c-3626-446f-8410-7e1f42ed16e1-mcd-auth-proxy-config\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755752 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-env-overrides\") pod \"ovnkube-node-mzcq6\" (UID: 
\"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755784 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-os-release\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755802 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755820 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755840 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-slash\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755861 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-etc-openvswitch\") pod \"ovnkube-node-mzcq6\" (UID: 
\"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755862 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e36db86c-3626-446f-8410-7e1f42ed16e1-rootfs\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755931 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-slash\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755964 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-etc-openvswitch\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755871 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-ovn\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755893 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-system-cni-dir\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" 
Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756038 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-netns\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.755963 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-system-cni-dir\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756115 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-netns\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756086 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-openvswitch\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756175 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-os-release\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756066 
4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-openvswitch\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756276 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756305 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756373 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcvk2\" (UniqueName: \"kubernetes.io/projected/8ae13b0a-1f69-476d-a552-4467fcedac14-kube-api-access-mcvk2\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756389 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 
12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756412 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e36db86c-3626-446f-8410-7e1f42ed16e1-proxy-tls\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756459 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-var-lib-openvswitch\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756510 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-systemd-units\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756541 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-log-socket\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756568 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-netd\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756590 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-config\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756618 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-systemd\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756637 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-node-log\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756659 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-bin\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756691 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-kubelet\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756716 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756740 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ae13b0a-1f69-476d-a552-4467fcedac14-ovn-node-metrics-cert\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756767 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nb7s\" (UniqueName: \"kubernetes.io/projected/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-kube-api-access-8nb7s\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756802 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtrjz\" (UniqueName: \"kubernetes.io/projected/e36db86c-3626-446f-8410-7e1f42ed16e1-kube-api-access-qtrjz\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cnibin\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.756943 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cnibin\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757014 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-systemd\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757058 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-node-log\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757107 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-bin\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757164 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-systemd-units\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757201 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-kubelet\") pod 
\"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757096 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-env-overrides\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757241 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cni-binary-copy\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757231 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-var-lib-openvswitch\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757306 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757310 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-log-socket\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757333 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e36db86c-3626-446f-8410-7e1f42ed16e1-mcd-auth-proxy-config\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757342 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-netd\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757480 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757514 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-script-lib\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.757732 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-config\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.762156 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e36db86c-3626-446f-8410-7e1f42ed16e1-proxy-tls\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.763668 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ae13b0a-1f69-476d-a552-4467fcedac14-ovn-node-metrics-cert\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.764495 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.777048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtrjz\" (UniqueName: \"kubernetes.io/projected/e36db86c-3626-446f-8410-7e1f42ed16e1-kube-api-access-qtrjz\") pod \"machine-config-daemon-6dl99\" (UID: \"e36db86c-3626-446f-8410-7e1f42ed16e1\") " pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.780079 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcvk2\" (UniqueName: \"kubernetes.io/projected/8ae13b0a-1f69-476d-a552-4467fcedac14-kube-api-access-mcvk2\") pod \"ovnkube-node-mzcq6\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.780643 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nb7s\" (UniqueName: \"kubernetes.io/projected/cd92d23b-8231-4e15-8dd4-5b912d6b6b42-kube-api-access-8nb7s\") pod \"multus-additional-cni-plugins-4p4qf\" (UID: \"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\") " pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.807289 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.852400 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.877956 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.877926 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.887046 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.893837 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:31:57 crc kubenswrapper[4799]: W0216 12:31:57.928840 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ae13b0a_1f69_476d_a552_4467fcedac14.slice/crio-ccdacabc2c0f599d71b956add2a5204bd979482321617ff9f5fd5d70407efb56 WatchSource:0}: Error finding container ccdacabc2c0f599d71b956add2a5204bd979482321617ff9f5fd5d70407efb56: Status 404 returned error can't find the container with id ccdacabc2c0f599d71b956add2a5204bd979482321617ff9f5fd5d70407efb56 Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.929111 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.966561 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:57 crc kubenswrapper[4799]: I0216 12:31:57.993768 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.020793 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.038073 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.057848 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.082665 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:31:23.90621867 +0000 UTC Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.090478 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.106777 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.129854 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.145609 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.165591 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.189666 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.303913 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.306194 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc" exitCode=0 Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.306264 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.306303 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"ccdacabc2c0f599d71b956add2a5204bd979482321617ff9f5fd5d70407efb56"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.308445 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerStarted","Data":"e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.308506 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerStarted","Data":"1ed7be57d6bc9c00d130a2552f40da3ab2117fa3bcb627babc6b3ef6a205903b"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.310225 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.310253 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.310270 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"21ad53e3e2c9820e34ba08bfff3169c7977b0bd12da56e365cfb0a063a93091a"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.311699 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7j77r" event={"ID":"ff442c08-09db-4354-b9be-b43956019ba7","Type":"ContainerStarted","Data":"be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.311737 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-7j77r" event={"ID":"ff442c08-09db-4354-b9be-b43956019ba7","Type":"ContainerStarted","Data":"790b0d2730bc46e5f9ba751348ec744f894b81e8e9bb36b55446adeb912125e1"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.313585 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zl9jj" event={"ID":"127d928e-7ce1-44a2-976e-de7017f78747","Type":"ContainerStarted","Data":"17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.313660 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zl9jj" event={"ID":"127d928e-7ce1-44a2-976e-de7017f78747","Type":"ContainerStarted","Data":"c20634463c5a75434867f24ff3248cf781436be59605881642f31557857a4931"} Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.316713 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.332354 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.349321 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.363845 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.378968 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.393419 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.406813 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.422414 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.445917 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.460674 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.475430 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.541919 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.557951 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.573279 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.586035 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d
66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.601837 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.618706 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.636830 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.650584 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.665255 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.680219 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.681537 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.681847 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:32:02.68179696 +0000 UTC m=+28.274812294 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.693054 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.712501 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.734711 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.756708 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.773370 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:58Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.782829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.782889 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.782919 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:31:58 crc kubenswrapper[4799]: I0216 12:31:58.782943 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783053 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783105 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783143 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783072 4799 secret.go:188] 
Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783160 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783179 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:02.783155438 +0000 UTC m=+28.376170772 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783156 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783201 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:02.783189129 +0000 UTC m=+28.376204473 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783210 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783260 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783217 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:02.783211719 +0000 UTC m=+28.376227053 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:58 crc kubenswrapper[4799]: E0216 12:31:58.783308 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:02.783297232 +0000 UTC m=+28.376312746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.083013 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:58:21.281203879 +0000 UTC Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.148833 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:31:59 crc kubenswrapper[4799]: E0216 12:31:59.149019 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.149470 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:31:59 crc kubenswrapper[4799]: E0216 12:31:59.149553 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.149705 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:31:59 crc kubenswrapper[4799]: E0216 12:31:59.149785 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.317998 4799 generic.go:334] "Generic (PLEG): container finished" podID="cd92d23b-8231-4e15-8dd4-5b912d6b6b42" containerID="e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf" exitCode=0 Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.318094 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerDied","Data":"e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf"} Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.321993 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f"} Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.322041 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e"} Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.322054 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782"} Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.322065 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7"} Feb 16 12:31:59 crc 
kubenswrapper[4799]: I0216 12:31:59.322078 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba"} Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.322087 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6"} Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.345760 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.368263 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.378326 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.393938 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.413218 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.424990 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.437871 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.450813 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.462998 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.476822 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.487876 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.499914 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:31:59 crc kubenswrapper[4799]: I0216 12:31:59.511037 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:31:59Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.084547 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:35:01.07579732 +0000 UTC Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.328550 4799 generic.go:334] "Generic (PLEG): container finished" podID="cd92d23b-8231-4e15-8dd4-5b912d6b6b42" containerID="1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9" exitCode=0 Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.328611 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerDied","Data":"1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9"} Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.346623 4799 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-co
ntroller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.372558 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.392738 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.408617 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.425217 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.448158 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.464215 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.479879 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.497001 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.514608 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.525861 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.546879 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:00 crc kubenswrapper[4799]: I0216 12:32:00.568950 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.085500 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:29:52.994910081 +0000 UTC Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.148830 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.149058 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 12:32:01.149138 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.149227 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 12:32:01.149409 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 12:32:01.149522 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.306547 4799 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.309151 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.309192 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.309206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.309408 4799 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.319722 4799 kubelet_node_status.go:115] "Node was previously registered" node="crc" 
Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.320059 4799 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.321762 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.321842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.321864 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.321893 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.321914 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.334600 4799 generic.go:334] "Generic (PLEG): container finished" podID="cd92d23b-8231-4e15-8dd4-5b912d6b6b42" containerID="d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a" exitCode=0 Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.334667 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerDied","Data":"d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.343788 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.346917 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-l8kgf"] Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.347591 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.350296 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.352356 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.353318 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.354828 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 12:32:01.367417 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.372566 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.372602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.372623 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.372641 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.372653 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.382968 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 12:32:01.394620 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.398971 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.399032 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.399047 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.399068 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.399081 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.401479 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.411958 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aa8c3669-05bd-45dd-8769-b8dac50ff193-serviceca\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.412025 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa8c3669-05bd-45dd-8769-b8dac50ff193-host\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.412085 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wb6v\" (UniqueName: \"kubernetes.io/projected/aa8c3669-05bd-45dd-8769-b8dac50ff193-kube-api-access-9wb6v\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 12:32:01.412616 4799 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.416898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.417199 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.417221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.417245 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.417262 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.417604 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.428513 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 12:32:01.433522 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.439506 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.439559 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.439574 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.439594 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.439606 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.445402 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 
12:32:01.460215 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: E0216 12:32:01.460739 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.466778 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.467951 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.467985 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc 
kubenswrapper[4799]: I0216 12:32:01.467996 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.468203 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.468216 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.484846 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.500356 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.511934 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.513842 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wb6v\" (UniqueName: 
\"kubernetes.io/projected/aa8c3669-05bd-45dd-8769-b8dac50ff193-kube-api-access-9wb6v\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.514031 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aa8c3669-05bd-45dd-8769-b8dac50ff193-serviceca\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.514108 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa8c3669-05bd-45dd-8769-b8dac50ff193-host\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.514253 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa8c3669-05bd-45dd-8769-b8dac50ff193-host\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.515167 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aa8c3669-05bd-45dd-8769-b8dac50ff193-serviceca\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.525523 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.535208 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wb6v\" (UniqueName: 
\"kubernetes.io/projected/aa8c3669-05bd-45dd-8769-b8dac50ff193-kube-api-access-9wb6v\") pod \"node-ca-l8kgf\" (UID: \"aa8c3669-05bd-45dd-8769-b8dac50ff193\") " pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.540148 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.557151 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
6T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.571578 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.571632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.571642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.571659 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.571669 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.574539 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.588995 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.603743 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.616273 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.633630 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.649307 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.666660 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.673869 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.673917 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.673952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.673973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.673986 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.681457 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l8kgf" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.687945 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.703053 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.717914 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.733706 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.752320 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: W0216 12:32:01.760452 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa8c3669_05bd_45dd_8769_b8dac50ff193.slice/crio-d9fc722c36ff8c57944a7ada868def79f96edd52f3a4e5678ce0f8c6c36bc158 WatchSource:0}: Error finding container d9fc722c36ff8c57944a7ada868def79f96edd52f3a4e5678ce0f8c6c36bc158: Status 404 returned error can't find the container with id d9fc722c36ff8c57944a7ada868def79f96edd52f3a4e5678ce0f8c6c36bc158 Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.766815 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8805
1c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.777098 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 
crc kubenswrapper[4799]: I0216 12:32:01.777183 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.777211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.777238 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.777258 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.784107 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.797380 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.880410 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.880788 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.880799 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.880816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.880828 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.983678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.983736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.983747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.983766 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:01 crc kubenswrapper[4799]: I0216 12:32:01.983778 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:01Z","lastTransitionTime":"2026-02-16T12:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.090556 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:43:14.618985612 +0000 UTC Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.094702 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.094766 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.094786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.094813 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.094832 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.198072 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.198668 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.198872 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.199046 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.199246 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.302455 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.302514 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.302528 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.302553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.302566 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.353825 4799 generic.go:334] "Generic (PLEG): container finished" podID="cd92d23b-8231-4e15-8dd4-5b912d6b6b42" containerID="90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021" exitCode=0 Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.354178 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerDied","Data":"90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.356274 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l8kgf" event={"ID":"aa8c3669-05bd-45dd-8769-b8dac50ff193","Type":"ContainerStarted","Data":"b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.356317 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l8kgf" event={"ID":"aa8c3669-05bd-45dd-8769-b8dac50ff193","Type":"ContainerStarted","Data":"d9fc722c36ff8c57944a7ada868def79f96edd52f3a4e5678ce0f8c6c36bc158"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.370249 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.387009 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.401958 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.407650 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 
12:32:02.407740 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.407772 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.407821 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.407845 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.422089 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.441356 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.456205 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.470967 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.482915 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.513843 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.518708 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.518761 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.518777 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.518806 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.518825 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.552184 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.576008 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.595854 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.613764 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.621570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.621613 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.621625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.621642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.621652 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.629816 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.646957 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.659529 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.673052 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.683841 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.697104 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.710053 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.722334 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.723890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.723945 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.723955 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc 
kubenswrapper[4799]: I0216 12:32:02.723972 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.724001 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.726723 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.726887 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:32:10.726854195 +0000 UTC m=+36.319869539 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.736550 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.755475 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.782007 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.800372 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.815959 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.827451 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.827500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.827518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 
12:32:02.827544 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.827562 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.827685 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.827757 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.827812 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.827855 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.827936 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.827967 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.827936 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.828012 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.828032 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.827992 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 
12:32:02.828079 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.828088 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:10.828049798 +0000 UTC m=+36.421065172 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.828119 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:10.82810562 +0000 UTC m=+36.421120994 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.828230 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:10.828209393 +0000 UTC m=+36.421224947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.828275 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:32:02 crc kubenswrapper[4799]: E0216 12:32:02.828355 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:10.828337416 +0000 UTC m=+36.421352750 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.828810 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":
\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.841076 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.931483 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.931545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.931559 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.931589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:02 crc kubenswrapper[4799]: I0216 12:32:02.931618 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:02Z","lastTransitionTime":"2026-02-16T12:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.035205 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.035250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.035262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.035281 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.035293 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.091273 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:09:29.239765045 +0000 UTC Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.140614 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.140762 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.140783 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.140809 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.140830 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.149288 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.150043 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:03 crc kubenswrapper[4799]: E0216 12:32:03.150239 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.150848 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:03 crc kubenswrapper[4799]: E0216 12:32:03.150981 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:03 crc kubenswrapper[4799]: E0216 12:32:03.151099 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.243353 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.243397 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.243409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.243432 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.243446 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.347582 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.347764 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.347787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.347813 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.347830 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.373206 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerStarted","Data":"f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.411878 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.435892 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.452952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.453004 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.453020 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.453045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.453064 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.459266 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.477466 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.503380 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb
7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.527229 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.548521 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.557075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.557194 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.557228 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.557262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.557286 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.570674 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.593385 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.616470 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.636062 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.656307 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.661236 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.661321 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.661349 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.661384 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.661413 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.677101 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.698727 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.764886 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.764943 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.764963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.764990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.765006 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.868433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.868487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.868506 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.868536 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.868555 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.972753 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.972847 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.972860 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.972897 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:03 crc kubenswrapper[4799]: I0216 12:32:03.972907 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:03Z","lastTransitionTime":"2026-02-16T12:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.075906 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.075955 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.075972 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.075993 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.076016 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.092373 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:39:00.611190256 +0000 UTC Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.179752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.179825 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.179844 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.179875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.179894 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.282364 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.282425 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.282444 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.282467 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.282483 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.382529 4799 generic.go:334] "Generic (PLEG): container finished" podID="cd92d23b-8231-4e15-8dd4-5b912d6b6b42" containerID="f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd" exitCode=0 Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.382662 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerDied","Data":"f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.390028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.390104 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.390165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.390201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.390228 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.394493 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.394946 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.394975 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.406457 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.425835 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.462678 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.467743 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.476331 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.494390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.494444 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.494463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.494486 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.494504 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.494668 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.516173 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6
c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.548423 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.571671 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.588213 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.597729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.597812 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.597834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.597863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.597882 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.604965 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.622241 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.643174 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.663452 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ov
errides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.677781 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.689699 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.700778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.701088 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.701293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.701400 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.701492 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.704985 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.718042 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.735731 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.753815 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.767858 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.780945 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.793928 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.804986 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.805052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.805067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc 
kubenswrapper[4799]: I0216 12:32:04.805091 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.805106 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.806543 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.823664 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.845000 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.858056 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.879426 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.894097 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.908543 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.908608 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.908624 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.908652 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.908672 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:04Z","lastTransitionTime":"2026-02-16T12:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.910516 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:04Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:04 crc kubenswrapper[4799]: I0216 12:32:04.911534 4799 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.012293 4799 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.012329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.012341 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.012358 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.012371 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.093320 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:54:01.988285354 +0000 UTC Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.115573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.115745 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.115829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.115946 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.116026 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.149066 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.149066 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.149802 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:05 crc kubenswrapper[4799]: E0216 12:32:05.149868 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:05 crc kubenswrapper[4799]: E0216 12:32:05.150062 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:05 crc kubenswrapper[4799]: E0216 12:32:05.150175 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.171050 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.192846 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6
c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.218697 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.219433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.219473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.219486 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.219508 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.219520 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.232314 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.251068 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.274276 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.288726 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.307589 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.321799 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.321853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.321870 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.321892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.321908 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.327957 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.343889 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.363189 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.382906 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.395477 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.402912 4799 generic.go:334] "Generic (PLEG): container finished" podID="cd92d23b-8231-4e15-8dd4-5b912d6b6b42" containerID="0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae" exitCode=0 Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.403000 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerDied","Data":"0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.403239 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.416209 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.424728 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.424790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.424804 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.424827 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.424841 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.435383 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.452221 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.467662 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.479519 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.499276 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.530785 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.532604 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.532916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.533141 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.533323 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.533474 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.549464 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc7
5976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.569588 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.584285 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.599835 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.616644 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.631759 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.637777 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.637824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.637836 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.637856 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.637869 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.649037 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.660179 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:05Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.742350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.742403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.742418 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.742440 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.742477 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.845394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.845450 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.845465 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.845487 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.845503 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.948501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.948545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.948554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.948575 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:05 crc kubenswrapper[4799]: I0216 12:32:05.948588 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:05Z","lastTransitionTime":"2026-02-16T12:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.051557 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.051653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.051665 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.051682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.051692 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.094236 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:14:39.647194258 +0000 UTC Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.155454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.155507 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.155524 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.155546 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.155560 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.258407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.258454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.258463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.258480 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.258491 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.361293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.361366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.361388 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.361420 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.361445 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.413967 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" event={"ID":"cd92d23b-8231-4e15-8dd4-5b912d6b6b42","Type":"ContainerStarted","Data":"bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.414021 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.437742 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.458340 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.465061 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.465118 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.465163 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.465187 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.465207 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.475461 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.500324 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.519109 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.538504 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.559725 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.568475 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.568650 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.568731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.568816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.568899 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.580180 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.594956 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.613967 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.633491 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.651205 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.667554 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.672092 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.672283 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.672307 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.672338 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.672358 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.686232 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:06Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.783882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.783959 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.783984 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.784019 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.784045 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.886733 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.886786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.886800 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.886822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.886837 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.989299 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.989379 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.989392 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.989412 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:06 crc kubenswrapper[4799]: I0216 12:32:06.989426 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:06Z","lastTransitionTime":"2026-02-16T12:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.091675 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.091724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.091732 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.091747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.091756 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.094872 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:24:41.546535237 +0000 UTC Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.108851 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.129766 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.148314 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.148395 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.148339 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:07 crc kubenswrapper[4799]: E0216 12:32:07.148520 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:07 crc kubenswrapper[4799]: E0216 12:32:07.148569 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:07 crc kubenswrapper[4799]: E0216 12:32:07.148664 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.152607 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.167850 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.182573 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.194593 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.194651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.194675 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.194706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.194728 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.201299 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.228927 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.246881 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.266393 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.287853 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.297914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.297971 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.297987 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.298009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.298026 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.307900 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.327890 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.351267 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.374891 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.395405 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.401254 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.401303 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.401341 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.401364 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.401380 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.419731 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/0.log" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.423980 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f" exitCode=1 Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.424034 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.424991 4799 scope.go:117] "RemoveContainer" containerID="716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.448834 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.469729 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.493560 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.503746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.503812 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.503827 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc 
kubenswrapper[4799]: I0216 12:32:07.503850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.503863 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.513802 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.538020 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a65
26a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.570339 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"message\\\":\\\"emoval\\\\nI0216 12:32:06.549013 6091 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:06.549026 6091 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:06.549078 
6091 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:32:06.549088 6091 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:32:06.549104 6091 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:06.549110 6091 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:06.549143 6091 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:06.549171 6091 factory.go:656] Stopping watch factory\\\\nI0216 12:32:06.549191 6091 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:32:06.549267 6091 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:06.549276 6091 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:06.549283 6091 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:06.549291 6091 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:32:06.549298 6091 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:32:06.549306 6091 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:06.549313 6091 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.587614 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.606080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.606180 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.606196 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.606217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.606230 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.608724 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.623698 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.640375 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.654416 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.669785 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.684366 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.697625 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:07Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.708495 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc 
kubenswrapper[4799]: I0216 12:32:07.708522 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.708532 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.708549 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.708561 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.812096 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.812200 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.812225 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.812273 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.812300 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.944855 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.944919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.944937 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.944965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:07 crc kubenswrapper[4799]: I0216 12:32:07.944982 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:07Z","lastTransitionTime":"2026-02-16T12:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.048693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.048742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.048750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.048768 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.048779 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.095550 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:34:00.908344368 +0000 UTC Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.151344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.151406 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.151415 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.151624 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.151640 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.254198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.254243 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.254253 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.254271 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.254281 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.357612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.357655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.357670 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.357693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.357713 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.430051 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/0.log" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.434062 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.434290 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.452201 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.461266 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.461311 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.461330 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc 
kubenswrapper[4799]: I0216 12:32:08.461352 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.461363 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.469239 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.488484 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a65
26a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.515163 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"message\\\":\\\"emoval\\\\nI0216 12:32:06.549013 6091 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:06.549026 6091 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:06.549078 6091 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:32:06.549088 6091 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:32:06.549104 6091 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:06.549110 6091 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:06.549143 6091 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:06.549171 6091 factory.go:656] Stopping watch factory\\\\nI0216 12:32:06.549191 6091 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:32:06.549267 6091 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:06.549276 6091 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:06.549283 6091 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:06.549291 6091 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:32:06.549298 6091 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:32:06.549306 6091 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:06.549313 6091 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.534642 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.550902 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.563969 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.564037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.564055 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 
12:32:08.564097 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.564116 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.565900 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.581928 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a0
5bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.599469 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.616829 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.638355 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.659388 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.666991 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc 
kubenswrapper[4799]: I0216 12:32:08.667045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.667059 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.667080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.667095 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.674811 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.690907 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.771118 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.771713 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.771921 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.772153 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.772385 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.876709 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.876782 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.876799 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.876819 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.876834 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.979646 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.979714 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.979728 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.979744 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:08 crc kubenswrapper[4799]: I0216 12:32:08.979756 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:08Z","lastTransitionTime":"2026-02-16T12:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.083178 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.083238 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.083262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.083286 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.083304 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.095741 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:58:10.957634561 +0000 UTC Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.120057 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84"] Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.120606 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.124627 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.126120 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.146838 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.149053 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.149081 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:09 crc kubenswrapper[4799]: E0216 12:32:09.149277 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.149346 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:09 crc kubenswrapper[4799]: E0216 12:32:09.149497 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:09 crc kubenswrapper[4799]: E0216 12:32:09.149617 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.160327 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.176761 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181
431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.185935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.185965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.185975 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.185993 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 
12:32:09.186005 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.206580 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"message\\\":\\\"emoval\\\\nI0216 12:32:06.549013 6091 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:06.549026 6091 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:06.549078 6091 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:32:06.549088 6091 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:32:06.549104 6091 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:06.549110 6091 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:06.549143 6091 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:06.549171 6091 factory.go:656] Stopping watch factory\\\\nI0216 12:32:06.549191 6091 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:32:06.549267 6091 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:06.549276 6091 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:06.549283 6091 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:06.549291 6091 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:32:06.549298 6091 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:32:06.549306 6091 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:06.549313 6091 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.224145 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.239081 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.253549 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.254305 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.254459 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.254633 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.254773 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6k5q\" (UniqueName: \"kubernetes.io/projected/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-kube-api-access-w6k5q\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.267965 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c2
0c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.283242 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a0
5bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.287919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.287974 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.287992 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc 
kubenswrapper[4799]: I0216 12:32:09.288016 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.288035 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.300334 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.313445 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.333018 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.351144 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.355441 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6k5q\" (UniqueName: \"kubernetes.io/projected/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-kube-api-access-w6k5q\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.355533 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.355568 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.355639 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.356620 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.356690 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 
12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.364759 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.369003 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.380880 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.381643 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6k5q\" (UniqueName: \"kubernetes.io/projected/2928b5d2-c9e0-4865-b99e-7aa13e3cdb66-kube-api-access-w6k5q\") pod \"ovnkube-control-plane-749d76644c-ddt84\" (UID: \"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.391219 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.391256 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 
12:32:09.391269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.391289 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.391304 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.435907 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.441467 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/1.log" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.443151 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/0.log" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.452183 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200" exitCode=1 Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.452425 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" 
event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.452602 4799 scope.go:117] "RemoveContainer" containerID="716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.453688 4799 scope.go:117] "RemoveContainer" containerID="d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200" Feb 16 12:32:09 crc kubenswrapper[4799]: E0216 12:32:09.453880 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" Feb 16 12:32:09 crc kubenswrapper[4799]: W0216 12:32:09.461936 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2928b5d2_c9e0_4865_b99e_7aa13e3cdb66.slice/crio-da50af2699e07a695b595487d2f58931790d47bb649be2853cfdea9203abaab0 WatchSource:0}: Error finding container da50af2699e07a695b595487d2f58931790d47bb649be2853cfdea9203abaab0: Status 404 returned error can't find the container with id da50af2699e07a695b595487d2f58931790d47bb649be2853cfdea9203abaab0 Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.470243 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.484939 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.494731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.494764 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.494772 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.494787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.494799 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.511884 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.544147 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"message\\\":\\\"emoval\\\\nI0216 12:32:06.549013 6091 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:06.549026 6091 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:06.549078 6091 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:32:06.549088 6091 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:32:06.549104 6091 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:06.549110 6091 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:06.549143 6091 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:06.549171 6091 factory.go:656] Stopping watch factory\\\\nI0216 12:32:06.549191 6091 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:32:06.549267 6091 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:06.549276 6091 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:06.549283 6091 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:06.549291 6091 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:32:06.549298 6091 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:32:06.549306 6091 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:06.549313 6091 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:08Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:32:08.453881 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:08.453937 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:08.453958 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:08.453972 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:08.453976 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 
12:32:08.454005 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:32:08.454005 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:08.454021 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:08.454047 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:08.454053 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:08.454055 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:08.454081 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:08.454101 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:08.454118 6252 factory.go:656] Stopping watch factory\\\\nI0216 12:32:08.454153 6252 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc
vk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.559582 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.576310 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.591337 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.598742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.598792 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.598805 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.598824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.598839 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.608796 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.622083 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.637759 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.654565 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.669737 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.686806 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.701295 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.702561 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.702592 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.702601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.702616 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.702628 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.717577 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.804926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.804980 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.804996 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.805018 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.805036 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.908060 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.908119 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.908154 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.908176 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:09 crc kubenswrapper[4799]: I0216 12:32:09.908192 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:09Z","lastTransitionTime":"2026-02-16T12:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.011268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.011337 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.011353 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.011379 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.011396 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.096296 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:06:03.488891208 +0000 UTC Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.114857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.114894 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.114903 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.114919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.114930 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.218868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.218935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.218953 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.218980 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.218999 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.322900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.322983 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.323007 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.323038 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.323059 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.426480 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.426548 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.426569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.426597 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.426615 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.465540 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/1.log" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.478977 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" event={"ID":"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66","Type":"ContainerStarted","Data":"075b84020126d4fb3687da68561f73415d651419699b2dff11304ae36df2cbce"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.479054 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" event={"ID":"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66","Type":"ContainerStarted","Data":"0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.479077 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" event={"ID":"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66","Type":"ContainerStarted","Data":"da50af2699e07a695b595487d2f58931790d47bb649be2853cfdea9203abaab0"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.502942 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.522101 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.528850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.528900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.528919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.528939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.528955 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.545404 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.579785 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"message\\\":\\\"emoval\\\\nI0216 12:32:06.549013 6091 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:06.549026 6091 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:06.549078 6091 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:32:06.549088 6091 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:32:06.549104 6091 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:06.549110 6091 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:06.549143 6091 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:06.549171 6091 factory.go:656] Stopping watch factory\\\\nI0216 12:32:06.549191 6091 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:32:06.549267 6091 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:06.549276 6091 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:06.549283 6091 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:06.549291 6091 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:32:06.549298 6091 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:32:06.549306 6091 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:06.549313 6091 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:08Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:32:08.453881 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:08.453937 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:08.453958 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:08.453972 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:08.453976 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 
12:32:08.454005 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:32:08.454005 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:08.454021 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:08.454047 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:08.454053 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:08.454055 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:08.454081 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:08.454101 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:08.454118 6252 factory.go:656] Stopping watch factory\\\\nI0216 12:32:08.454153 6252 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc
vk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.598788 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.627792 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.631736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.631793 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.631808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.631828 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.631849 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.650246 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.664212 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.683327 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.700676 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.718946 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.735188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.735241 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.735255 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.735274 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.735288 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.738630 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.755913 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.770662 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.776074 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.776319 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:32:26.776289764 +0000 UTC m=+52.369305108 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.786744 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.837535 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.837576 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.837588 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.837607 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.837621 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.877220 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.877400 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.877504 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:26.877479017 +0000 UTC m=+52.470494391 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.877659 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.877816 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.877829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.877977 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:26.87793859 +0000 UTC m=+52.470953964 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.878015 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.878249 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.878275 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.878295 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.878358 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:32:26.878341921 +0000 UTC m=+52.471357295 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.878631 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.878690 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.878709 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:10 crc kubenswrapper[4799]: E0216 12:32:10.878799 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:26.878776674 +0000 UTC m=+52.471792018 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.940600 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.940654 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.940668 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.940688 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:10 crc kubenswrapper[4799]: I0216 12:32:10.940751 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:10Z","lastTransitionTime":"2026-02-16T12:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.044151 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.044526 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.044620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.044719 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.044813 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.097307 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:52:25.214599044 +0000 UTC Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.147791 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.147852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.147875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.147903 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.147921 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.148341 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.148784 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.148748 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.148942 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.149256 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.149385 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.250465 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.250544 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.250644 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.250680 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.250700 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.353724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.353782 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.353801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.353822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.353835 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.454092 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2clkm"] Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.455069 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.455243 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.456935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.457008 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.457036 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.457075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.457100 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.480037 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.510499 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a65
26a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.538300 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://716bb7f203eaa56eab9deca18dec2d50822bfa1967b178731eb383e0c47ec70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"message\\\":\\\"emoval\\\\nI0216 12:32:06.549013 6091 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:06.549026 6091 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:06.549078 6091 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:32:06.549088 6091 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:32:06.549104 6091 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:06.549110 6091 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:06.549143 6091 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:06.549171 6091 factory.go:656] Stopping watch factory\\\\nI0216 12:32:06.549191 6091 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:32:06.549267 6091 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:06.549276 6091 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:06.549283 6091 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:06.549291 6091 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:32:06.549298 6091 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:32:06.549306 6091 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:06.549313 6091 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:08Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:32:08.453881 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:08.453937 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:08.453958 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:08.453972 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:08.453976 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 
12:32:08.454005 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:32:08.454005 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:08.454021 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:08.454047 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:08.454053 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:08.454055 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:08.454081 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:08.454101 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:08.454118 6252 factory.go:656] Stopping watch factory\\\\nI0216 12:32:08.454153 6252 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc
vk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.553872 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.559512 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.559571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 
12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.559592 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.559620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.559639 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.574609 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.576741 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.576814 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.576834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.576863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.576884 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.586759 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxkfk\" (UniqueName: \"kubernetes.io/projected/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-kube-api-access-hxkfk\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.587021 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.594337 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.597418 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.599632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.599693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.599715 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc 
kubenswrapper[4799]: I0216 12:32:11.599742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.599765 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.613111 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.616119 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.620865 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.621020 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.621102 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.621212 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.621283 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.632560 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.640448 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.644835 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.644871 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.644885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.644907 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.644922 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.646970 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc 
kubenswrapper[4799]: E0216 12:32:11.657782 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.661404 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.661485 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.661508 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.661540 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.661563 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.662397 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc7
5976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.679365 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.684221 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.684463 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.686468 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.686509 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.686527 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.686551 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.686569 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.687712 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxkfk\" (UniqueName: \"kubernetes.io/projected/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-kube-api-access-hxkfk\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.687821 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.688000 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.688074 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs podName:e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd nodeName:}" failed. No retries permitted until 2026-02-16 12:32:12.188050759 +0000 UTC m=+37.781066133 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs") pod "network-metrics-daemon-2clkm" (UID: "e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.696615 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.697344 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.698019 4799 scope.go:117] "RemoveContainer" containerID="d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200" Feb 16 12:32:11 crc kubenswrapper[4799]: E0216 12:32:11.698387 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.716353 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxkfk\" (UniqueName: \"kubernetes.io/projected/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-kube-api-access-hxkfk\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.716524 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.730543 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.743276 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.758314 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.769644 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.784352 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.789081 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.789174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.789193 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.789214 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.789230 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.801329 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15
b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.823655 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.840313 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.858004 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.875536 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.892598 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.892661 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.892698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.892730 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.892751 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.896537 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.909391 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.926394 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.951016 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:08Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:32:08.453881 6252 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:08.453937 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:08.453958 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:08.453972 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:08.453976 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:32:08.454005 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:32:08.454005 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:08.454021 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:08.454047 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:08.454053 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:08.454055 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:08.454081 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:08.454101 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:08.454118 6252 factory.go:656] Stopping watch factory\\\\nI0216 12:32:08.454153 6252 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.974644 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.992695 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.996237 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.996307 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.996335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 
12:32:11.996369 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:11 crc kubenswrapper[4799]: I0216 12:32:11.996395 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:11Z","lastTransitionTime":"2026-02-16T12:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.009166 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:12Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.026256 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a0
5bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:12Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.061301 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:12Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:12 crc 
kubenswrapper[4799]: I0216 12:32:12.097662 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:04:01.891267722 +0000 UTC Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.098823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.098860 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.098868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.098885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.098897 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.198648 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:12 crc kubenswrapper[4799]: E0216 12:32:12.198830 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:12 crc kubenswrapper[4799]: E0216 12:32:12.198933 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs podName:e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd nodeName:}" failed. No retries permitted until 2026-02-16 12:32:13.198913412 +0000 UTC m=+38.791928736 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs") pod "network-metrics-daemon-2clkm" (UID: "e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.200606 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.200658 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.200676 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.200699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.200717 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.303733 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.303803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.303819 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.303846 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.303865 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.406529 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.406571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.406580 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.406593 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.406603 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.508947 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.508985 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.508994 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.509007 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.509019 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.611526 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.611575 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.611586 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.611602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.611612 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.715289 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.715377 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.715403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.715439 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.715462 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.818642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.818690 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.818705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.818728 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.818745 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.921737 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.921810 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.921828 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.921853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:12 crc kubenswrapper[4799]: I0216 12:32:12.921870 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:12Z","lastTransitionTime":"2026-02-16T12:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.024449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.024508 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.024524 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.024547 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.024603 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.097816 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:51:19.46651964 +0000 UTC Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.128037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.128086 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.128095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.128109 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.128119 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.148292 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.148334 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.148419 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:13 crc kubenswrapper[4799]: E0216 12:32:13.148546 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.148978 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:13 crc kubenswrapper[4799]: E0216 12:32:13.149088 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:13 crc kubenswrapper[4799]: E0216 12:32:13.149198 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:13 crc kubenswrapper[4799]: E0216 12:32:13.149278 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.210898 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:13 crc kubenswrapper[4799]: E0216 12:32:13.211092 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:13 crc kubenswrapper[4799]: E0216 12:32:13.211186 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs podName:e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd nodeName:}" failed. No retries permitted until 2026-02-16 12:32:15.21116431 +0000 UTC m=+40.804179664 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs") pod "network-metrics-daemon-2clkm" (UID: "e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.230453 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.230500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.230513 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.230535 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.230546 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.332777 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.332822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.332834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.332850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.332861 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.435868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.435914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.435931 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.435952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.435968 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.539242 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.539304 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.539320 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.539342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.539359 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.644014 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.644092 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.644114 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.644181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.644201 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.747343 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.747410 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.747469 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.747497 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.747515 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.850661 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.850739 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.850765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.850798 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.850823 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.954879 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.954962 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.954984 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.955025 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:13 crc kubenswrapper[4799]: I0216 12:32:13.955046 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:13Z","lastTransitionTime":"2026-02-16T12:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.059511 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.059587 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.059603 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.059638 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.059652 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.098208 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:18:59.906291733 +0000 UTC Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.162881 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.162973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.162991 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.163020 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.163037 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.266233 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.266316 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.266333 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.266354 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.266369 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.369925 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.369975 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.369988 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.370033 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.370049 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.473465 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.473528 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.473541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.473564 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.473578 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.576246 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.576308 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.576327 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.576374 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.576394 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.678915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.678962 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.678975 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.678991 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.679004 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.781405 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.781461 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.781474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.781494 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.781507 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.884443 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.884510 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.884518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.884533 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.884543 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.987218 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.987290 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.987309 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.987335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:14 crc kubenswrapper[4799]: I0216 12:32:14.987353 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:14Z","lastTransitionTime":"2026-02-16T12:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.090822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.090895 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.090911 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.090933 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.090951 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.099022 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:46:22.536673846 +0000 UTC Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.148886 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.148938 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.149017 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:15 crc kubenswrapper[4799]: E0216 12:32:15.149208 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.149496 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:15 crc kubenswrapper[4799]: E0216 12:32:15.149606 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:15 crc kubenswrapper[4799]: E0216 12:32:15.149700 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:15 crc kubenswrapper[4799]: E0216 12:32:15.149808 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.167022 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc 
kubenswrapper[4799]: I0216 12:32:15.185593 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.193805 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.193860 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.193873 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.193894 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.193909 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.204283 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.221631 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.236821 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.239263 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:15 crc kubenswrapper[4799]: E0216 12:32:15.239449 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:15 crc kubenswrapper[4799]: E0216 12:32:15.239538 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs podName:e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd nodeName:}" failed. No retries permitted until 2026-02-16 12:32:19.239514262 +0000 UTC m=+44.832529596 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs") pod "network-metrics-daemon-2clkm" (UID: "e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.252506 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.266793 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.283571 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.296518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.296551 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.296560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.296575 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.296587 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.300894 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15
b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.352975 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.374512 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:08Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:32:08.453881 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:08.453937 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:08.453958 6252 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0216 12:32:08.453972 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:08.453976 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:32:08.454005 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:32:08.454005 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:08.454021 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:08.454047 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:08.454053 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:08.454055 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:08.454081 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:08.454101 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:08.454118 6252 factory.go:656] Stopping watch factory\\\\nI0216 12:32:08.454153 6252 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.388843 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.399047 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.399114 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.399167 4799 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.399201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.399223 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.406410 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.424092 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.437331 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.455658 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.501022 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.501052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.501093 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.501106 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.501116 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.604472 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.604548 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.604556 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.604571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.604582 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.707488 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.707550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.707569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.707603 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.707630 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.810458 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.810549 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.810577 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.810611 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.810633 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.913234 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.913618 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.913759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.913893 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:15 crc kubenswrapper[4799]: I0216 12:32:15.914019 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:15Z","lastTransitionTime":"2026-02-16T12:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.017414 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.017887 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.018117 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.018297 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.018428 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.099449 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:09:24.442517783 +0000 UTC Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.122628 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.122687 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.122705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.122732 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.122751 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.226065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.226203 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.226235 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.226267 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.226289 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.329965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.330022 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.330043 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.330067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.330085 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.433725 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.433801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.433842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.433883 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.433908 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.538054 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.538208 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.538334 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.538366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.538385 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.642003 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.642115 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.642176 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.642205 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.642233 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.746273 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.746333 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.746351 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.746375 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.746395 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.850088 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.850500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.850690 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.850899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.851111 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.954798 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.954865 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.954883 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.954907 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:16 crc kubenswrapper[4799]: I0216 12:32:16.954925 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:16Z","lastTransitionTime":"2026-02-16T12:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.058058 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.058210 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.058242 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.058276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.058303 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.100173 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:48:04.285135467 +0000 UTC Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.148692 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.148712 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.148860 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:17 crc kubenswrapper[4799]: E0216 12:32:17.149017 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.149354 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:17 crc kubenswrapper[4799]: E0216 12:32:17.149536 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:17 crc kubenswrapper[4799]: E0216 12:32:17.149729 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:17 crc kubenswrapper[4799]: E0216 12:32:17.149990 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.176064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.176161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.176182 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.176207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.176225 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.279747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.279812 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.279828 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.279899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.279921 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.382592 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.382679 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.382704 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.382736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.382759 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.485809 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.485885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.485905 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.485932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.485950 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.589611 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.589657 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.589668 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.589683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.589696 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.693253 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.693319 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.693339 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.693366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.693387 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.796571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.796612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.796625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.796641 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.796654 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.900755 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.900798 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.900811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.900827 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:17 crc kubenswrapper[4799]: I0216 12:32:17.900839 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:17Z","lastTransitionTime":"2026-02-16T12:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.004720 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.004786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.004810 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.004843 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.004870 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.101351 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:47:46.846583575 +0000 UTC Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.108269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.108329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.108357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.108390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.108417 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.211325 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.211430 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.211453 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.211479 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.211497 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.315457 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.315541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.315566 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.315593 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.315611 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.418757 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.418830 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.418850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.418878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.418898 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.521868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.521942 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.521962 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.521992 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.522011 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.624609 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.624672 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.624691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.624716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.624736 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.728601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.728687 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.728706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.728731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.728748 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.832240 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.832305 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.832314 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.832329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.832357 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.934975 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.935031 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.935042 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.935059 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:18 crc kubenswrapper[4799]: I0216 12:32:18.935070 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:18Z","lastTransitionTime":"2026-02-16T12:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.038109 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.038166 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.038177 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.038193 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.038204 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.102014 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:21:16.930202076 +0000 UTC Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.141402 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.141458 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.141471 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.141489 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.141502 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.149303 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.149346 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.149346 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:19 crc kubenswrapper[4799]: E0216 12:32:19.149484 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.149512 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:19 crc kubenswrapper[4799]: E0216 12:32:19.149761 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:19 crc kubenswrapper[4799]: E0216 12:32:19.149798 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:19 crc kubenswrapper[4799]: E0216 12:32:19.149903 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.248291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.248765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.248941 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.249182 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.249407 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.302625 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:19 crc kubenswrapper[4799]: E0216 12:32:19.302822 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:19 crc kubenswrapper[4799]: E0216 12:32:19.303263 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs podName:e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd nodeName:}" failed. No retries permitted until 2026-02-16 12:32:27.303225759 +0000 UTC m=+52.896241303 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs") pod "network-metrics-daemon-2clkm" (UID: "e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.351961 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.352012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.352026 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.352045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.352059 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.454012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.454056 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.454070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.454087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.454098 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.556333 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.556390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.556410 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.556433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.556457 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.659628 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.659673 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.659681 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.659699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.659709 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.763144 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.763192 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.763201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.763217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.763227 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.865730 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.865777 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.865787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.865805 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.865818 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.968543 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.968613 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.968632 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.968657 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:19 crc kubenswrapper[4799]: I0216 12:32:19.968676 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:19Z","lastTransitionTime":"2026-02-16T12:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.071208 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.071258 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.071269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.071288 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.071302 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.102660 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:32:19.105320666 +0000 UTC Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.174203 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.174284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.174307 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.174335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.174355 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.276639 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.276687 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.276699 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.276714 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.276725 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.379984 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.380021 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.380030 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.380044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.380055 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.482705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.482749 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.482759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.482775 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.482785 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.585922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.585985 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.586008 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.586036 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.586201 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.688602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.688630 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.688638 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.688652 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.688662 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.791929 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.792779 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.792958 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.793111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.793316 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.897062 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.897166 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.897186 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.897212 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:20 crc kubenswrapper[4799]: I0216 12:32:20.897228 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:20Z","lastTransitionTime":"2026-02-16T12:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.000505 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.000588 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.000614 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.000653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.000680 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.103008 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:49:57.476847781 +0000 UTC Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.104885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.104934 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.104952 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.104980 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.104999 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.148776 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.148828 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.148917 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:21 crc kubenswrapper[4799]: E0216 12:32:21.148950 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:21 crc kubenswrapper[4799]: E0216 12:32:21.149173 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.149168 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:21 crc kubenswrapper[4799]: E0216 12:32:21.149231 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:21 crc kubenswrapper[4799]: E0216 12:32:21.149318 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.208540 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.208586 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.208597 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.208617 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.208628 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.312066 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.312575 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.312706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.312867 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.312987 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.416766 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.417257 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.417395 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.417502 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.417582 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.520554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.520625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.520656 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.520695 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.520719 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.624968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.625054 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.625072 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.625101 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.625167 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.728305 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.728405 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.728427 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.728457 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.728475 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.831949 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.832328 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.832518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.832712 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.832894 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.937819 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.938407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.938655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.938910 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:21 crc kubenswrapper[4799]: I0216 12:32:21.939200 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:21Z","lastTransitionTime":"2026-02-16T12:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.014026 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.014113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.014167 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.014196 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.014215 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: E0216 12:32:22.037570 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.044711 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.044776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.044797 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.044870 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.044891 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.104072 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:17:48.243744999 +0000 UTC Feb 16 12:32:22 crc kubenswrapper[4799]: E0216 12:32:22.106219 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",
\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.111621 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.111797 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.111919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.112076 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.112262 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: E0216 12:32:22.132912 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:22 crc kubenswrapper[4799]: E0216 12:32:22.133546 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.136480 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.136642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.136670 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.136716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.136760 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.241410 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.241465 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.241482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.241507 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.241525 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.344897 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.344950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.344967 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.344996 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.345014 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.448275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.448662 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.448796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.449030 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.449403 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.553100 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.553210 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.553233 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.553265 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.553286 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.657090 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.657605 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.657853 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.658098 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.658416 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.762673 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.762730 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.762748 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.762774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.762794 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.865698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.865752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.865770 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.865797 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.865821 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.969295 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.969370 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.969388 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.969415 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:22 crc kubenswrapper[4799]: I0216 12:32:22.969434 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:22Z","lastTransitionTime":"2026-02-16T12:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.072823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.072916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.072943 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.072979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.073002 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.105294 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:51:33.703938826 +0000 UTC Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.148999 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.149037 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.149278 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:23 crc kubenswrapper[4799]: E0216 12:32:23.149287 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.149327 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:23 crc kubenswrapper[4799]: E0216 12:32:23.149480 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:23 crc kubenswrapper[4799]: E0216 12:32:23.149607 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:23 crc kubenswrapper[4799]: E0216 12:32:23.149714 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.177439 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.177493 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.177504 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.177521 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.177540 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.280143 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.280194 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.280204 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.280219 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.280230 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.382724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.382770 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.382780 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.382795 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.382806 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.486273 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.486317 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.486326 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.486342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.486354 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.589071 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.589106 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.589116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.589146 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.589156 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.692103 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.692180 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.692195 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.692220 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.692235 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.795449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.795504 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.795517 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.795540 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.795553 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.899196 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.899247 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.899264 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.899285 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:23 crc kubenswrapper[4799]: I0216 12:32:23.899300 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:23Z","lastTransitionTime":"2026-02-16T12:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.002394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.002453 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.002471 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.002497 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.002515 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.105486 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:05:05.507087853 +0000 UTC Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.106510 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.106556 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.106573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.106595 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.106610 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.209348 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.209424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.209444 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.209475 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.209495 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.312787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.312870 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.312893 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.312924 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.312946 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.417053 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.417158 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.417184 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.417217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.417236 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.521519 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.521920 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.522018 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.522113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.522229 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.625729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.625883 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.625916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.625953 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.625982 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.730366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.730430 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.730448 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.730474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.730493 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.835377 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.835445 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.835470 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.835500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.835522 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.939488 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.939571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.939595 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.939625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:24 crc kubenswrapper[4799]: I0216 12:32:24.939652 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:24Z","lastTransitionTime":"2026-02-16T12:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.043680 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.044209 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.044464 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.044696 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.044917 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.106095 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:40:26.675554912 +0000 UTC Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.148443 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.148487 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.148508 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.148634 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:25 crc kubenswrapper[4799]: E0216 12:32:25.148628 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:25 crc kubenswrapper[4799]: E0216 12:32:25.148762 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:25 crc kubenswrapper[4799]: E0216 12:32:25.148885 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.148900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: E0216 12:32:25.148970 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.148990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.149021 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.149061 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.149091 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.151148 4799 scope.go:117] "RemoveContainer" containerID="d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.172374 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.191114 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.212197 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.229816 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.246833 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.251501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.251541 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.251552 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.251568 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.251580 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.270277 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:08Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:32:08.453881 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:08.453937 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:08.453958 6252 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0216 12:32:08.453972 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:08.453976 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:32:08.454005 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:32:08.454005 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:08.454021 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:08.454047 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:08.454053 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:08.454055 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:08.454081 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:08.454101 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:08.454118 6252 factory.go:656] Stopping watch factory\\\\nI0216 12:32:08.454153 6252 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.286376 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.299446 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.313156 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.325154 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.334728 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.345794 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc 
kubenswrapper[4799]: I0216 12:32:25.354482 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.354510 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.354519 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.354536 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.354548 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.359000 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.372158 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.385712 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.397878 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.456269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.456322 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.456334 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.456358 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.456374 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.545454 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/1.log" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.548903 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.549452 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.559752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.559837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.559857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.559891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.559923 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.565675 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.578951 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.597967 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.618329 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.660814 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:08Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:32:08.453881 6252 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:08.453937 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:08.453958 6252 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:32:08.453972 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:08.453976 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:32:08.454005 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:32:08.454005 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:08.454021 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:08.454047 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:08.454053 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:08.454055 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:08.454081 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:08.454101 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:08.454118 6252 factory.go:656] Stopping watch factory\\\\nI0216 12:32:08.454153 6252 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.662342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.662418 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.662447 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.662476 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.662495 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.678293 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.694440 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.720287 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.740475 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.751937 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.761063 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc 
kubenswrapper[4799]: I0216 12:32:25.764630 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.764666 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.764677 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.764693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.764705 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.772389 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.787308 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.805637 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.817388 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.829104 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.867005 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.867049 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.867061 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.867079 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.867090 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.969935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.969982 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.969994 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.970012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:25 crc kubenswrapper[4799]: I0216 12:32:25.970023 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:25Z","lastTransitionTime":"2026-02-16T12:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.072364 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.072464 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.072488 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.072520 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.072542 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.107954 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:16:39.006008686 +0000 UTC Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.175646 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.175730 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.175754 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.175787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.175809 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.279642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.279710 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.279729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.279759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.279775 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.383643 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.383721 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.383738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.383767 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.383786 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.486395 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.486465 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.486476 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.486492 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.486502 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.555275 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/2.log" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.555892 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/1.log" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.558880 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8" exitCode=1 Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.558919 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.558958 4799 scope.go:117] "RemoveContainer" containerID="d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.560585 4799 scope.go:117] "RemoveContainer" containerID="9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8" Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.560966 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.581799 4799 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.589032 4799 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.589085 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.589095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.589111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.589139 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.600928 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.619752 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.639503 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.659380 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.685423 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a6ea595fc93739bf885ceaf473b2d3266c5312b50ddebd0a0b75eceb2d1200\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:08Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:32:08.453881 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:08.453937 6252 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:08.453958 6252 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0216 12:32:08.453972 6252 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:08.453976 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:32:08.454005 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:32:08.454005 6252 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:08.454021 6252 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:08.454047 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:08.454053 6252 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:08.454055 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:08.454081 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:08.454101 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:08.454118 6252 factory.go:656] Stopping watch factory\\\\nI0216 12:32:08.454153 6252 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.691815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.691869 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.691880 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.691900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.691919 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.704420 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.725291 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T
12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3
832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.746258 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.761377 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.783723 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.790990 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.791377 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:32:58.791340142 +0000 UTC m=+84.384355486 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.795501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.795701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.795876 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.796046 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.796389 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.799065 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc 
kubenswrapper[4799]: I0216 12:32:26.815867 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.837508 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.856924 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.876102 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.892745 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.892814 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.892879 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.892922 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893110 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893172 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893190 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893195 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893267 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:58.893242465 +0000 UTC m=+84.486257809 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893295 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:58.893280446 +0000 UTC m=+84.486295800 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893415 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893438 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893450 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893500 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:58.893484752 +0000 UTC m=+84.486500096 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893144 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:32:26 crc kubenswrapper[4799]: E0216 12:32:26.893746 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:32:58.893731689 +0000 UTC m=+84.486747033 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.899787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.899856 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.899869 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.899893 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:26 crc kubenswrapper[4799]: I0216 12:32:26.899908 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:26Z","lastTransitionTime":"2026-02-16T12:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.002996 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.003070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.003088 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.003115 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.003163 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.105983 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.106019 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.106030 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.106046 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.106058 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.108426 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:18:04.819557474 +0000 UTC Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.148364 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.148459 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.148487 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.148499 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:27 crc kubenswrapper[4799]: E0216 12:32:27.148604 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:27 crc kubenswrapper[4799]: E0216 12:32:27.148800 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:27 crc kubenswrapper[4799]: E0216 12:32:27.148956 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:27 crc kubenswrapper[4799]: E0216 12:32:27.149179 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.208968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.209019 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.209030 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.209046 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.209056 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.312217 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.312269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.312281 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.312295 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.312306 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.398415 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:27 crc kubenswrapper[4799]: E0216 12:32:27.398651 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:27 crc kubenswrapper[4799]: E0216 12:32:27.398796 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs podName:e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd nodeName:}" failed. No retries permitted until 2026-02-16 12:32:43.398760837 +0000 UTC m=+68.991776201 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs") pod "network-metrics-daemon-2clkm" (UID: "e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.415769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.415825 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.415840 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.415861 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.415879 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.518898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.518959 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.518978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.519001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.519019 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.565754 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/2.log" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.572775 4799 scope.go:117] "RemoveContainer" containerID="9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8" Feb 16 12:32:27 crc kubenswrapper[4799]: E0216 12:32:27.573115 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.596101 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.606810 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.618974 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.622349 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.623922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: 
I0216 12:32:27.623977 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.623996 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.624023 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.624043 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.640567 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.656478 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.679955 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.707434 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.727744 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.727826 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.727846 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.727886 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.727910 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.730424 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.751683 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.775275 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.810094 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for 
service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.830713 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.830765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.830783 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.830806 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.830825 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.836086 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.889003 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.907416 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.924873 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.932876 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.932898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.932909 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.932922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.932935 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:27Z","lastTransitionTime":"2026-02-16T12:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.941642 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.953562 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc 
kubenswrapper[4799]: I0216 12:32:27.968776 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.988615 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:27 crc kubenswrapper[4799]: I0216 12:32:27.999253 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.020235 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.036413 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.036463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.036473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.036485 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.036495 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.040835 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 
7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.052314 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.067311 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.087002 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.103692 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.109533 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:45:21.901351606 +0000 UTC Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.117943 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 
2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.130332 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc 
kubenswrapper[4799]: I0216 12:32:28.139276 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.139336 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.139350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.139369 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.139384 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.149896 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc7
5976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.164740 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.187042 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.207891 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.227073 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.242360 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc 
kubenswrapper[4799]: I0216 12:32:28.242474 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.242500 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.242537 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.242569 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.249480 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.345819 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.345913 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.345937 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.346010 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.346040 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.449087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.449161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.449174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.449187 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.449197 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.552563 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.552628 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.552643 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.552663 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.552677 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.655613 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.655656 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.655668 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.655683 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.655695 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.757984 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.758037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.758048 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.758070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.758086 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.860438 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.860495 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.860507 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.860526 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.860542 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.963661 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.963732 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.963756 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.963790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:28 crc kubenswrapper[4799]: I0216 12:32:28.963816 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:28Z","lastTransitionTime":"2026-02-16T12:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.067317 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.067382 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.067401 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.067430 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.067450 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.109768 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:50:06.251800833 +0000 UTC Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.148777 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.148883 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.148947 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:29 crc kubenswrapper[4799]: E0216 12:32:29.148981 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.148901 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:29 crc kubenswrapper[4799]: E0216 12:32:29.149184 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:29 crc kubenswrapper[4799]: E0216 12:32:29.149304 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:29 crc kubenswrapper[4799]: E0216 12:32:29.149472 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.169894 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.169945 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.169955 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.169972 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.169984 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.273476 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.273530 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.273545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.273567 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.273582 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.377045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.377102 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.377148 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.377171 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.377186 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.480319 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.480365 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.480374 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.480394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.480405 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.582808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.582886 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.582906 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.582932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.582951 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.686238 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.686294 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.686319 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.686344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.686361 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.789820 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.789902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.789927 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.789960 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.789986 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.893522 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.893608 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.893636 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.893672 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.893694 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.996806 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.996858 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.996870 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.996889 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:29 crc kubenswrapper[4799]: I0216 12:32:29.996901 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:29Z","lastTransitionTime":"2026-02-16T12:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.100188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.100280 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.100308 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.100344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.100375 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.110553 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:22:25.807486567 +0000 UTC Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.203337 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.203399 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.203418 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.203442 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.203463 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.306306 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.306374 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.306393 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.306420 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.306447 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.409706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.409774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.409792 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.409816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.409833 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.512687 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.512761 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.512780 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.512804 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.512823 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.615805 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.615833 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.615842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.615857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.615867 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.719246 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.719313 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.719331 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.719354 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.719375 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.821748 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.821809 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.821827 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.821852 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.821872 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.924964 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.925005 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.925016 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.925033 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:30 crc kubenswrapper[4799]: I0216 12:32:30.925045 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:30Z","lastTransitionTime":"2026-02-16T12:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.028690 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.029082 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.029304 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.029510 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.029695 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.110783 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:07:15.416377979 +0000 UTC Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.133032 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.133293 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.133468 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.133648 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.133846 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.148699 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.148712 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.148755 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.148809 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:31 crc kubenswrapper[4799]: E0216 12:32:31.148938 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:31 crc kubenswrapper[4799]: E0216 12:32:31.149035 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:31 crc kubenswrapper[4799]: E0216 12:32:31.149183 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:31 crc kubenswrapper[4799]: E0216 12:32:31.149315 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.238017 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.238062 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.238077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.238099 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.238114 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.341316 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.341372 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.341390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.341414 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.341432 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.443688 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.443736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.443747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.443761 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.443770 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.546744 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.546788 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.546797 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.546811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.546821 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.648763 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.648836 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.648854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.648885 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.648903 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.750891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.750940 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.750961 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.750978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.750990 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.853806 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.853861 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.853878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.853905 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.853924 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.956837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.956913 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.956932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.956959 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:31 crc kubenswrapper[4799]: I0216 12:32:31.956978 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:31Z","lastTransitionTime":"2026-02-16T12:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.059739 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.059787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.059799 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.059820 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.059832 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.111441 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:47:22.598556112 +0000 UTC Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.163297 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.163349 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.163363 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.163380 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.163398 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.266819 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.266859 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.266867 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.266882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.266896 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.369825 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.369889 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.369907 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.369932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.369950 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.372113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.372223 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.372248 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.372335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.372370 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: E0216 12:32:32.388792 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.394105 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.394175 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.394186 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.394209 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.394223 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: E0216 12:32:32.410161 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.414336 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.414399 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.414417 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.414435 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.414471 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: E0216 12:32:32.430666 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.437137 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.437183 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.437196 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.437216 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.437229 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: E0216 12:32:32.451881 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.463230 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.463292 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.463307 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.463331 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.463348 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: E0216 12:32:32.478157 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:32 crc kubenswrapper[4799]: E0216 12:32:32.478306 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.479951 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.480080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.480262 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.480394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.480526 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.583368 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.583401 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.583411 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.583424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.583434 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.685785 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.685822 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.685834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.685847 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.685856 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.788383 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.788419 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.788427 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.788441 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.788451 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.891421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.891467 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.891477 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.891493 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.891504 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.994275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.994335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.994345 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.994365 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:32 crc kubenswrapper[4799]: I0216 12:32:32.994376 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:32Z","lastTransitionTime":"2026-02-16T12:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.096995 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.097080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.097106 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.097177 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.097200 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.112240 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:42:40.361390067 +0000 UTC Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.148797 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.148814 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.148814 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:33 crc kubenswrapper[4799]: E0216 12:32:33.149052 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.148835 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:33 crc kubenswrapper[4799]: E0216 12:32:33.149163 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:33 crc kubenswrapper[4799]: E0216 12:32:33.149288 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:33 crc kubenswrapper[4799]: E0216 12:32:33.149343 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.199883 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.199923 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.199935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.199949 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.199962 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.302231 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.302295 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.302307 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.302322 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.302334 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.404887 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.405189 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.405301 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.405386 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.405492 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.508461 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.508516 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.508525 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.508540 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.508549 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.611358 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.611399 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.611410 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.611424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.611432 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.713801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.713878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.713902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.713927 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.713949 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.816606 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.816655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.816670 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.816689 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.816702 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.920071 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.920145 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.920160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.920179 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:33 crc kubenswrapper[4799]: I0216 12:32:33.920192 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:33Z","lastTransitionTime":"2026-02-16T12:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.023368 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.023415 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.023429 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.023447 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.023462 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.112830 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 18:18:59.026519018 +0000 UTC Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.125660 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.125703 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.125716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.125735 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.125752 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.228835 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.228928 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.228950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.228977 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.228996 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.331746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.331824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.331848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.331869 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.331884 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.434766 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.434815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.434829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.434848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.434864 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.537473 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.537529 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.537547 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.537570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.537586 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.640390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.640434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.640445 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.640477 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.640489 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.743818 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.743864 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.743882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.743903 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.743919 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.846554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.846598 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.846610 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.846627 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.846638 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.949706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.949765 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.949791 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.949814 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:34 crc kubenswrapper[4799]: I0216 12:32:34.949829 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:34Z","lastTransitionTime":"2026-02-16T12:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.052566 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.052633 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.052646 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.052663 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.052677 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.113088 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:12:38.393654348 +0000 UTC Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.148359 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.148372 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:35 crc kubenswrapper[4799]: E0216 12:32:35.148809 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.148536 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:35 crc kubenswrapper[4799]: E0216 12:32:35.149039 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:35 crc kubenswrapper[4799]: E0216 12:32:35.148851 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.148500 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:35 crc kubenswrapper[4799]: E0216 12:32:35.149480 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.156289 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.156330 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.156341 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.156356 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.156367 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.165438 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.181563 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.196081 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.209692 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.227259 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.245573 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for 
service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.259715 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.259803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.259865 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.259875 4799 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.259912 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.259928 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.279365 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.295130 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.310199 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.327437 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.340238 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc 
kubenswrapper[4799]: I0216 12:32:35.354924 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.362942 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.363028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.363039 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.363110 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.363154 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.368484 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.383653 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.400594 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.412429 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:35Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.467514 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.467968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.467981 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.467999 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.468013 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.571706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.571756 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.571768 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.571787 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.571798 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.674559 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.674639 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.674657 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.674685 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.674705 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.777387 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.777452 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.777498 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.777522 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.777535 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.879832 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.879887 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.879898 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.879922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.879937 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.984483 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.984553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.984566 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.984611 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:35 crc kubenswrapper[4799]: I0216 12:32:35.984628 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:35Z","lastTransitionTime":"2026-02-16T12:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.088614 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.088671 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.088684 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.088718 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.088732 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.113290 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:56:06.240120863 +0000 UTC Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.191750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.191808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.191821 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.191842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.191858 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.295076 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.295116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.295149 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.295166 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.295181 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.398940 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.399007 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.399031 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.399063 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.399085 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.502878 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.502935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.502953 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.502979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.502997 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.605685 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.605775 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.605792 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.605821 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.605860 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.709165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.709242 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.709255 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.709275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.709290 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.813180 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.813258 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.813279 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.813309 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.813328 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.917365 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.917425 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.917444 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.917476 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:36 crc kubenswrapper[4799]: I0216 12:32:36.917496 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:36Z","lastTransitionTime":"2026-02-16T12:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.021568 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.021666 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.021706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.021743 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.021769 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.113523 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:46:16.436008356 +0000 UTC Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.125568 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.125635 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.125651 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.125672 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.125690 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.149222 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.149266 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.149380 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:37 crc kubenswrapper[4799]: E0216 12:32:37.149500 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.149549 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:37 crc kubenswrapper[4799]: E0216 12:32:37.149843 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:37 crc kubenswrapper[4799]: E0216 12:32:37.149926 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:37 crc kubenswrapper[4799]: E0216 12:32:37.150050 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.228383 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.228450 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.228471 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.228498 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.228518 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.331808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.331883 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.331906 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.331939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.331965 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.436945 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.437009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.437039 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.437064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.437082 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.540664 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.540734 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.540752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.540780 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.540800 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.644626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.644677 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.644693 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.644715 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.644731 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.748571 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.748639 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.748663 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.748696 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.748718 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.851797 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.851833 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.851842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.851856 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.851866 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.956095 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.956160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.956174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.956200 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:37 crc kubenswrapper[4799]: I0216 12:32:37.956220 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:37Z","lastTransitionTime":"2026-02-16T12:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.059196 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.059278 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.059317 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.059355 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.059377 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.114368 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:45:28.040998451 +0000 UTC Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.163015 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.163089 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.163111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.163168 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.163193 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.267404 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.267461 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.267478 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.267553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.267572 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.372268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.372319 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.372335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.372362 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.372376 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.475875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.475957 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.475976 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.476006 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.476026 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.578597 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.578660 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.578677 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.578701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.578718 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.680664 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.680710 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.680722 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.680738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.680750 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.783940 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.784002 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.784015 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.784037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.784053 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.887145 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.887201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.887216 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.887236 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.887253 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.991020 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.991093 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.991111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.991156 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:38 crc kubenswrapper[4799]: I0216 12:32:38.991172 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:38Z","lastTransitionTime":"2026-02-16T12:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.094212 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.094272 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.094291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.094316 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.094331 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.114841 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:32:10.663340305 +0000 UTC Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.148713 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.148782 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.148908 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.148914 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:39 crc kubenswrapper[4799]: E0216 12:32:39.149027 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:39 crc kubenswrapper[4799]: E0216 12:32:39.149179 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:39 crc kubenswrapper[4799]: E0216 12:32:39.149611 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:39 crc kubenswrapper[4799]: E0216 12:32:39.149698 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.149885 4799 scope.go:117] "RemoveContainer" containerID="9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8" Feb 16 12:32:39 crc kubenswrapper[4799]: E0216 12:32:39.150252 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.196553 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.196599 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.196611 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.196628 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.196642 4799 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.299844 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.299899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.299915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.299939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.299955 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.403682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.403733 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.403742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.403759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.403768 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.506784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.506832 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.506842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.506860 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.506871 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.610233 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.610289 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.610305 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.610326 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.610342 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.713326 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.713399 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.713417 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.713457 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.713496 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.817181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.817273 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.817296 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.817331 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.817351 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.920746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.920780 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.920790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.920803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:39 crc kubenswrapper[4799]: I0216 12:32:39.920812 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:39Z","lastTransitionTime":"2026-02-16T12:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.023900 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.023950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.023965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.023990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.024012 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.115144 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:12:33.713070697 +0000 UTC Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.127501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.127549 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.127563 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.127583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.127596 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.230544 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.230680 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.230742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.230776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.230832 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.333973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.334052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.334078 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.334113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.334167 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.438161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.438578 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.438594 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.438613 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.438655 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.542350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.542434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.542456 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.542493 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.542518 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.645488 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.645564 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.645582 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.645610 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.645629 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.749912 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.749978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.749996 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.750024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.750046 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.853207 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.853261 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.853274 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.853294 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.853308 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.956830 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.956890 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.956902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.956924 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:40 crc kubenswrapper[4799]: I0216 12:32:40.956940 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:40Z","lastTransitionTime":"2026-02-16T12:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.066358 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.066421 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.066436 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.066463 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.066481 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.115860 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:13:08.08288337 +0000 UTC
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.148677 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.148850 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:32:41 crc kubenswrapper[4799]: E0216 12:32:41.148907 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.148927 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.148864 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:32:41 crc kubenswrapper[4799]: E0216 12:32:41.149044 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:32:41 crc kubenswrapper[4799]: E0216 12:32:41.149230 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:32:41 crc kubenswrapper[4799]: E0216 12:32:41.149357 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.170560 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.170669 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.170690 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.170715 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.170739 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.274116 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.274188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.274202 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.274224 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.274237 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.377841 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.377899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.377914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.377936 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.377949 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.480535 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.480588 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.480599 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.480616 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.480628 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.583939 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.583999 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.584015 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.584052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.584066 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.686908 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.686957 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.686968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.686984 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.686993 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.789643 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.789694 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.789705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.789725 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.789737 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.893336 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.893414 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.893428 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.893455 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.893473 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.996503 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.996545 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.996556 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.996570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:41 crc kubenswrapper[4799]: I0216 12:32:41.996582 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:41Z","lastTransitionTime":"2026-02-16T12:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.098655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.098707 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.098716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.098729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.098740 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.116232 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:15:34.201616659 +0000 UTC
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.202173 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.202226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.202237 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.202254 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.202266 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.304709 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.304769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.304785 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.304808 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.304827 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.407001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.407052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.407065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.407082 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.407094 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.509039 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.509115 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.509160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.509181 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.509194 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.578966 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.579018 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.579028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.579044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.579055 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:32:42 crc kubenswrapper[4799]: E0216 12:32:42.593186 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.596484 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.596533 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.596547 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.596566 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.596581 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:42 crc kubenswrapper[4799]: E0216 12:32:42.613579 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.617915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.617960 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.617973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.617991 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.618004 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:42 crc kubenswrapper[4799]: E0216 12:32:42.631696 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.636000 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.636068 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.636092 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.636155 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.636183 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:42 crc kubenswrapper[4799]: E0216 12:32:42.650611 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.654520 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.654588 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.654602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.654627 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.654647 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:42 crc kubenswrapper[4799]: E0216 12:32:42.666813 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:42 crc kubenswrapper[4799]: E0216 12:32:42.667052 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.669366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.669433 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.669452 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.669480 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.669551 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.771810 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.771854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.771866 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.771880 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.771892 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.873715 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.873778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.873788 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.873802 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.873813 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.975366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.975408 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.975422 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.975441 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:42 crc kubenswrapper[4799]: I0216 12:32:42.975453 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:42Z","lastTransitionTime":"2026-02-16T12:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.077725 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.077781 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.077797 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.077820 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.077833 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.116929 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:33:36.680685823 +0000 UTC Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.148380 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.148467 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:43 crc kubenswrapper[4799]: E0216 12:32:43.148584 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.148602 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.148492 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:43 crc kubenswrapper[4799]: E0216 12:32:43.149051 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:43 crc kubenswrapper[4799]: E0216 12:32:43.149165 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:43 crc kubenswrapper[4799]: E0216 12:32:43.149169 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.179978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.180031 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.180044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.180063 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.180101 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.283205 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.283249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.283261 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.283277 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.283288 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.386048 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.386188 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.386211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.386240 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.386259 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.481061 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:43 crc kubenswrapper[4799]: E0216 12:32:43.481458 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:43 crc kubenswrapper[4799]: E0216 12:32:43.481865 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs podName:e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd nodeName:}" failed. No retries permitted until 2026-02-16 12:33:15.481835681 +0000 UTC m=+101.074851055 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs") pod "network-metrics-daemon-2clkm" (UID: "e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.494739 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.494842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.494874 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.494912 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.494952 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.597372 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.597420 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.597432 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.597447 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.597457 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.701067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.701173 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.701194 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.701226 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.701252 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.804794 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.804870 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.804891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.804919 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.804938 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.908673 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.908729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.908748 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.908773 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:43 crc kubenswrapper[4799]: I0216 12:32:43.908792 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:43Z","lastTransitionTime":"2026-02-16T12:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.011759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.012214 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.012314 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.012424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.012514 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.116705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.116770 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.116789 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.116816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.116837 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.117058 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:24:17.092363979 +0000 UTC Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.220097 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.220229 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.220253 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.220285 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.220311 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.323884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.323917 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.323928 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.323943 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.323953 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.427609 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.427707 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.427741 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.427778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.427802 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.530617 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.530686 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.530704 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.530732 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.530752 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.631736 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/0.log" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.632313 4799 generic.go:334] "Generic (PLEG): container finished" podID="ff442c08-09db-4354-b9be-b43956019ba7" containerID="be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d" exitCode=1 Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.632408 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.632413 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7j77r" event={"ID":"ff442c08-09db-4354-b9be-b43956019ba7","Type":"ContainerDied","Data":"be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.632800 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.632941 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.632961 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.632974 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.634303 4799 scope.go:117] "RemoveContainer" containerID="be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.649952 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.672758 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a65
26a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.697567 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 
7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.714039 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.734165 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.736663 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.736744 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.736764 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.736796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.736817 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.750744 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.761450 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.772077 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.782488 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc 
kubenswrapper[4799]: I0216 12:32:44.803520 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.818195 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.836585 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.839442 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.839572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.839656 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.839742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.839819 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.857443 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.872426 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"2026-02-16T12:31:58+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8\\\\n2026-02-16T12:31:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8 to /host/opt/cni/bin/\\\\n2026-02-16T12:31:59Z [verbose] multus-daemon started\\\\n2026-02-16T12:31:59Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:32:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.885149 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.900016 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.915696 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.942637 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.942682 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.942691 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.942707 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:44 crc kubenswrapper[4799]: I0216 12:32:44.942718 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:44Z","lastTransitionTime":"2026-02-16T12:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.045974 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.046016 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.046028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.046044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.046055 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.117529 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:52:21.581232377 +0000 UTC Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.148445 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.148548 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:45 crc kubenswrapper[4799]: E0216 12:32:45.148589 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.148748 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:45 crc kubenswrapper[4799]: E0216 12:32:45.148797 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.148944 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:45 crc kubenswrapper[4799]: E0216 12:32:45.148999 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.149036 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.149254 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.149275 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.149296 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.149311 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: E0216 12:32:45.149193 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.174694 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.192756 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.209684 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.220776 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.244623 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.254249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.254311 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.254326 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.254350 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.254367 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.269836 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 
7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.291087 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.308077 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.323042 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.340341 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.357844 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.357907 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.357927 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.357950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.357966 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.358643 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.372496 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc 
kubenswrapper[4799]: I0216 12:32:45.383527 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.394921 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.404206 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.414691 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"2026-02-16T12:31:58+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8\\\\n2026-02-16T12:31:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8 to /host/opt/cni/bin/\\\\n2026-02-16T12:31:59Z [verbose] multus-daemon started\\\\n2026-02-16T12:31:59Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:32:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.423349 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.460198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.460266 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.460278 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.460299 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.460314 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.563053 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.563094 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.563103 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.563117 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.563139 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.638539 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/0.log" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.638604 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7j77r" event={"ID":"ff442c08-09db-4354-b9be-b43956019ba7","Type":"ContainerStarted","Data":"c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.653146 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.666479 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.666523 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.666533 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.666550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.666562 4799 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.666927 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.680719 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.692269 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.706428 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.724279 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for 
service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.737030 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.749435 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.765501 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.771490 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.771776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.771862 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 
12:32:45.771959 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.772048 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.784957 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.800103 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a0
5bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.814725 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc 
kubenswrapper[4799]: I0216 12:32:45.827001 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.839816 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.850901 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.863029 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"2026-02-16T12:31:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8\\\\n2026-02-16T12:31:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8 to /host/opt/cni/bin/\\\\n2026-02-16T12:31:59Z [verbose] multus-daemon started\\\\n2026-02-16T12:31:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T12:32:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.872741 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e62
70057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.874334 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.874382 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.874395 4799 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.874411 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.874421 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.977731 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.977784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.977796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.977813 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:45 crc kubenswrapper[4799]: I0216 12:32:45.977828 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:45Z","lastTransitionTime":"2026-02-16T12:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.080427 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.080499 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.080520 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.080546 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.080566 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.118088 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:24:30.163521015 +0000 UTC Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.183033 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.183111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.183154 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.183171 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.183183 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.285108 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.285174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.285183 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.285198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.285209 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.388319 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.388375 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.388389 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.388409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.388425 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.491059 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.491112 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.491147 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.491161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.491171 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.594751 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.594838 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.594856 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.594882 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.594899 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.697706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.697753 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.697763 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.697776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.697788 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.800587 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.800631 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.800640 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.800655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.800669 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.902544 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.902588 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.902598 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.902615 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:46 crc kubenswrapper[4799]: I0216 12:32:46.902627 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:46Z","lastTransitionTime":"2026-02-16T12:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.005057 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.005103 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.005112 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.005367 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.005392 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.107666 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.107701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.107712 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.107729 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.107743 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.118850 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:50:20.529079765 +0000 UTC Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.150332 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:47 crc kubenswrapper[4799]: E0216 12:32:47.150471 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.150482 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:47 crc kubenswrapper[4799]: E0216 12:32:47.150591 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.150650 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.150799 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:47 crc kubenswrapper[4799]: E0216 12:32:47.150961 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:47 crc kubenswrapper[4799]: E0216 12:32:47.151017 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.209963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.210011 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.210021 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.210037 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.210049 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.312642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.312962 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.313035 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.313108 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.313208 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.415859 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.415915 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.415932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.415957 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.415977 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.519698 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.519753 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.519771 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.519794 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.519813 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.622537 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.622597 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.622614 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.622639 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.622659 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.724874 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.725022 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.725056 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.725086 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.725108 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.827558 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.827618 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.827636 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.827659 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.827677 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.930228 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.930286 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.930303 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.930329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:47 crc kubenswrapper[4799]: I0216 12:32:47.930349 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:47Z","lastTransitionTime":"2026-02-16T12:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.033282 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.033365 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.033381 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.033409 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.033425 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.119299 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:20:58.007624128 +0000 UTC Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.137024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.137080 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.137092 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.137111 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.137140 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.239703 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.239750 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.239761 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.239778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.239791 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.342868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.342934 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.342953 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.342978 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.342996 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.445955 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.446012 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.446024 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.446044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.446058 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.549163 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.549231 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.549249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.549277 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.549333 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.652011 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.652069 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.652083 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.652104 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.652120 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.755013 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.755094 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.755153 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.755176 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.755217 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.858848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.858926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.858944 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.858973 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.858992 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.962172 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.962248 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.962268 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.962304 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:48 crc kubenswrapper[4799]: I0216 12:32:48.962325 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:48Z","lastTransitionTime":"2026-02-16T12:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.084424 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.084496 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.084516 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.084543 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.084565 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.120352 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:05:00.748136606 +0000 UTC Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.148979 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.148996 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.148996 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:49 crc kubenswrapper[4799]: E0216 12:32:49.149306 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:49 crc kubenswrapper[4799]: E0216 12:32:49.149364 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:49 crc kubenswrapper[4799]: E0216 12:32:49.149540 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.149872 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:49 crc kubenswrapper[4799]: E0216 12:32:49.150016 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.187741 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.187803 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.187820 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.187850 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.187896 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.291717 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.291784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.291806 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.291834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.291853 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.395284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.395360 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.395378 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.395405 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.395426 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.499168 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.499228 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.499248 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.499274 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.499290 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.603810 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.603881 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.603899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.603926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.603946 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.706759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.706838 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.706856 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.706892 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.706913 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.810716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.810780 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.810798 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.810831 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.810853 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.914963 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.915022 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.915040 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.915068 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:49 crc kubenswrapper[4799]: I0216 12:32:49.915087 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:49Z","lastTransitionTime":"2026-02-16T12:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.018774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.018829 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.018848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.018873 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.018893 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.121468 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:46:34.551465571 +0000 UTC Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.121667 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.121716 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.121740 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.121775 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.121794 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.224301 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.224429 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.224449 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.224480 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.224506 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.328082 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.328204 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.328231 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.328271 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.328293 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.430912 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.430956 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.430968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.430988 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.431002 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.534676 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.534733 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.534749 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.534779 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.534797 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.638028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.638084 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.638103 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.638348 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.638376 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.741769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.741845 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.741918 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.741948 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.742036 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.845109 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.845215 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.845235 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.845261 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.845280 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.947669 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.947775 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.947791 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.947814 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:50 crc kubenswrapper[4799]: I0216 12:32:50.947831 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:50Z","lastTransitionTime":"2026-02-16T12:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.051174 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.051261 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.051280 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.051310 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.051344 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.122361 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:49:08.471930679 +0000 UTC Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.149191 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.149294 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.149378 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.149262 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:51 crc kubenswrapper[4799]: E0216 12:32:51.149464 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:51 crc kubenswrapper[4799]: E0216 12:32:51.149593 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:51 crc kubenswrapper[4799]: E0216 12:32:51.149716 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:51 crc kubenswrapper[4799]: E0216 12:32:51.149801 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.155004 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.155057 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.155076 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.155099 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.155120 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.258400 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.258870 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.259019 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.259288 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.259472 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.363645 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.363723 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.363747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.363778 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.363801 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.466548 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.466590 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.466600 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.466616 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.466626 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.570620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.570688 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.570701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.570724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.570739 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.673932 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.673970 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.673986 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.674007 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.674020 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.778585 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.778986 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.779160 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.779323 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.779546 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.883034 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.883098 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.883113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.883163 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.883182 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.987068 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.987210 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.987241 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.987273 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:51 crc kubenswrapper[4799]: I0216 12:32:51.987295 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:51Z","lastTransitionTime":"2026-02-16T12:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.090301 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.090342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.090355 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.090371 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.090385 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.123175 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:38:25.869433512 +0000 UTC Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.182383 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.194072 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.194165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.194190 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.194216 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.194236 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.297598 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.297646 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.297657 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.297677 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.297694 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.401453 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.401544 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.401570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.401601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.401624 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.505796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.505868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.505888 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.505912 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.505932 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.610164 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.610241 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.610260 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.610322 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.610345 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.714290 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.714384 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.714416 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.714448 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.714472 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.818249 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.818321 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.818340 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.818370 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.818390 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.922720 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.922784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.922810 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.922838 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:52 crc kubenswrapper[4799]: I0216 12:32:52.922857 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:52Z","lastTransitionTime":"2026-02-16T12:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.003925 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.004001 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.004018 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.004045 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.004068 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.029537 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.036044 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.036155 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.036176 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.036206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.036226 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.060233 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.066747 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.066820 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.066844 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.066875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.066898 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.090307 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.096678 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.096738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.096759 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.096784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.096805 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.123331 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:29:06.755582973 +0000 UTC Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.127755 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",
\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.132968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.133028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.133042 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.133064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.133077 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.148746 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.148810 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.148837 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.149015 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.149067 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.149256 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.149490 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.149643 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.154515 4799 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60d89bd8-e3f6-4a9b-86b3-b3b67634d734\\\",\\\"systemUUID\\\":\\\"25cac3c5-4ae9-4428-b3ff-f389dbe91e52\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:53 crc kubenswrapper[4799]: E0216 12:32:53.154860 4799 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.156550 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.156590 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.156601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.156616 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.156629 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.259587 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.259630 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.259640 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.259655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.259666 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.363169 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.363295 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.363315 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.363342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.363363 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.466497 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.466566 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.466585 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.466613 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.466629 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.569846 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.569911 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.569926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.569948 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.569968 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.672842 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.672926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.672947 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.672976 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.672997 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.776950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.777033 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.777052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.777087 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.777107 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.880830 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.880908 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.880931 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.880959 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.880982 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.984535 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.984602 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.984622 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.984649 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:53 crc kubenswrapper[4799]: I0216 12:32:53.984667 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:53Z","lastTransitionTime":"2026-02-16T12:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.087633 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.087709 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.087740 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.087769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.087788 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.124080 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:29:22.08401643 +0000 UTC Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.149962 4799 scope.go:117] "RemoveContainer" containerID="9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.191206 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.191283 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.191312 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.191351 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.191379 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.295165 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.295250 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.295272 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.295299 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.295322 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.398561 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.398620 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.398637 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.398674 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.398698 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.501361 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.501403 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.501418 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.501440 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.501454 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.605784 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.605857 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.605875 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.605902 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.605926 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.676648 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/2.log" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.681450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.682299 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.708920 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.708981 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.708994 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.709014 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.709029 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.710440 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.731597 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.757528 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc 
kubenswrapper[4799]: I0216 12:32:54.775640 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.803270 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.811858 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.811920 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc 
kubenswrapper[4799]: I0216 12:32:54.811937 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.811965 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.811985 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.823768 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.839257 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.852982 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"2026-02-16T12:31:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8\\\\n2026-02-16T12:31:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8 to /host/opt/cni/bin/\\\\n2026-02-16T12:31:59Z [verbose] multus-daemon started\\\\n2026-02-16T12:31:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T12:32:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.871530 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e62
70057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.884465 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.894651 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eb8b34e-4a74-4dfa-a673-7ac3defabf04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80217237c504698bf142a9eb0ffd021fb6fef992af71b475092d23cc32676cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.908262 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.915222 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 
12:32:54.915284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.915299 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.915321 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.915334 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:54Z","lastTransitionTime":"2026-02-16T12:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.922004 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.939905 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.961820 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for 
service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 
model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.976329 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:54 crc kubenswrapper[4799]: I0216 12:32:54.993669 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:54Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.007095 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.017726 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.017762 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.017776 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc 
kubenswrapper[4799]: I0216 12:32:55.017796 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.017808 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.121065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.121224 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.121258 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.121300 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.121326 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.125193 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:18:28.79044637 +0000 UTC Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.149024 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.149104 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.149238 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:55 crc kubenswrapper[4799]: E0216 12:32:55.149280 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.149305 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:55 crc kubenswrapper[4799]: E0216 12:32:55.149418 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:55 crc kubenswrapper[4799]: E0216 12:32:55.149574 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:55 crc kubenswrapper[4799]: E0216 12:32:55.149666 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.170355 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.184378 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.203293 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"2026-02-16T12:31:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8\\\\n2026-02-16T12:31:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8 to /host/opt/cni/bin/\\\\n2026-02-16T12:31:59Z [verbose] multus-daemon started\\\\n2026-02-16T12:31:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T12:32:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.224705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.225103 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.225216 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e6270057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.225388 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.225625 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.225651 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.242091 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f
06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.255576 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eb8b34e-4a74-4dfa-a673-7ac3defabf04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80217237c504698bf142a9eb0ffd021fb6fef992af71b475092d23cc32676cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.269327 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.285222 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.306105 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa2
89a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.328980 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.329031 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.329040 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.329059 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.329071 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.331074 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 
7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 
model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.349859 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.370289 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.391899 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.410779 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.429027 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.432025 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 
12:32:55.432060 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.432070 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.432085 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.432098 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.446101 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.467500 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.483722 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.535848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.535938 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.535960 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.535992 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.536013 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.638936 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.638987 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.638998 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.639015 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.639027 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.694612 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/3.log" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.695681 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/2.log" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.700243 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10" exitCode=1 Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.700299 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.700357 4799 scope.go:117] "RemoveContainer" containerID="9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.701245 4799 scope.go:117] "RemoveContainer" containerID="0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10" Feb 16 12:32:55 crc kubenswrapper[4799]: E0216 12:32:55.701436 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.718379 4799 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.736408 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.741443 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.741502 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.741517 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.741538 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.741549 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.754588 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.770141 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"2026-02-16T12:31:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8\\\\n2026-02-16T12:31:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8 to /host/opt/cni/bin/\\\\n2026-02-16T12:31:59Z [verbose] multus-daemon started\\\\n2026-02-16T12:31:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T12:32:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.786397 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e62
70057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.799884 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.815886 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eb8b34e-4a74-4dfa-a673-7ac3defabf04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80217237c504698bf142a9eb0ffd021fb6fef992af71b475092d23cc32676cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.829948 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.840171 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.843854 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.843924 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.843935 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.843950 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.843961 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.856642 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.875329 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddbf7d81d1c569a5bce5a134021f85f0231776c75f6c7631b28e817aa8a9ba8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:26Z\\\",\\\"message\\\":\\\"1.647737ms\\\\nI0216 12:32:26.075504 6470 services_controller.go:356] Processing sync for service openshift-cluster-machine-approver/machine-approver for network=default\\\\nI0216 12:32:26.075511 6470 services_controller.go:360] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver for network=default : 
7.48µs\\\\nI0216 12:32:26.075521 6470 services_controller.go:356] Processing sync for service openshift-apiserver/api for network=default\\\\nF0216 12:32:26.075355 6470 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:26Z is after 2025-08-24T17:21:41Z]\\\\nI0216 12:32:26.075501 6470 model_cli\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:55Z\\\",\\\"message\\\":\\\" 6863 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:55.250293 6863 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:55.250299 6863 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:55.250323 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:32:55.250365 6863 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:55.250412 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI0216 12:32:55.250434 6863 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:55.250443 6863 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:55.250453 6863 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:55.250472 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:32:55.250493 6863 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:55.250530 6863 factory.go:656] Stopping watch factory\\\\nI0216 12:32:55.250556 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:32:55.250607 6863 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:32:55.250629 6863 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 12:32:55.250729 6863 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd
\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.887119 4799 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.899341 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.912032 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.924355 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.936076 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.946354 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 
12:32:55.946384 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.946392 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.946407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.946416 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:55Z","lastTransitionTime":"2026-02-16T12:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.946846 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:55 crc kubenswrapper[4799]: I0216 12:32:55.961035 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:55Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.049687 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.049739 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.049752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.049770 4799 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.049783 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.126172 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:55:06.026286527 +0000 UTC Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.153694 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.153791 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.153824 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.153863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.153885 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.256584 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.256653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.256671 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.256696 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.256714 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.360594 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.361054 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.361201 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.361344 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.361476 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.464996 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.465818 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.466279 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.466494 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.466708 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.570735 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.570793 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.570801 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.570819 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.570832 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.673654 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.673696 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.673705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.673719 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.673728 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.706902 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/3.log" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.715012 4799 scope.go:117] "RemoveContainer" containerID="0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10" Feb 16 12:32:56 crc kubenswrapper[4799]: E0216 12:32:56.715365 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.740153 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e971d9-2ab6-4f2e-ad1a-979f4213dfea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:31:54Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 12:31:48.833539 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:31:48.835606 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1171794533/tls.crt::/tmp/serving-cert-1171794533/tls.key\\\\\\\"\\\\nI0216 12:31:54.919312 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:31:54.925617 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:31:54.925739 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:31:54.925814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:31:54.925859 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:31:54.932687 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:31:54.932708 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:31:54.932717 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:31:54.932721 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:31:54.932724 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:31:54.932727 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:31:54.932780 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 12:31:54.935910 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3832c755d688e4bf466cd12b35ea3293b
9260617de040fa4c61c9cd2ac7b6d1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.763099 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.776604 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.776666 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.776685 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc 
kubenswrapper[4799]: I0216 12:32:56.776711 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.776731 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.781519 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zl9jj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"127d928e-7ce1-44a2-976e-de7017f78747\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ad01eab46e20bf56456c08eafd0c0c6678628f35c4e9802a9a1332387a3e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zl9jj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.807386 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd92d23b-8231-4e15-8dd4-5b912d6b6b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bad4308100a181431f48f9b209cfca3ba46813f7d7dd23654ed4df2beb67ca9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e710e748b4b9ff1d0012cda30c566c271624f1c6410c7de4db0f4ab5f9e753bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7370bfb870cdb00097ad0511a57d28a91733697d5ce0a1187abeef563183c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ea79a508b71284f7c3b3d6838e6611717b6c997cf9d42229f3fb074ce3a72a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90a65
26a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90a6526a54109ce41e09f6c80c1cfdffd5e60d4e10e089efe7c9e115cc834021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a128433f92c0b06b6e253684984aa289a99c17260277b352fcbb6fde7b12cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cab1a146b45eb56efa0c8ba86943ef7cb56e49830ea32f4ba1c9a17a3837dae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:32:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nb7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4p4qf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.840859 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae13b0a-1f69-476d-a552-4467fcedac14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:55Z\\\",\\\"message\\\":\\\" 6863 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:32:55.250293 6863 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:32:55.250299 6863 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:32:55.250323 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:32:55.250365 6863 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:32:55.250412 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 12:32:55.250434 6863 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:32:55.250443 6863 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:32:55.250453 6863 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:32:55.250472 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:32:55.250493 6863 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:32:55.250530 6863 factory.go:656] Stopping watch factory\\\\nI0216 12:32:55.250556 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:32:55.250607 6863 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:32:55.250629 6863 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 12:32:55.250729 6863 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:32:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca407f9ac35fff926f
03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mcvk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzcq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.857191 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2928b5d2-c9e0-4865-b99e-7aa13e3cdb66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a8552eed8df94ce9a237bbe930c0a2d4cbf3e0ac7ac5b1dfcf82e1855ca217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075b84020126d4fb3687da68561f73415d651
419699b2dff11304ae36df2cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6k5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ddt84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.873197 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ba7a265-d264-4289-b7e6-4fd3960833cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418a98849459af486025b199bcd3371cd6c78d117c78ff3fb93c51eae7c160a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9be7ab1f0026f4220ef91b159ce9c343d525de5a68d951280c69acb75a806023\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f331009db7cc0bf0e614beeeffa0ef50a17a2b7d1724e7eacb585fc9380732e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.879468 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.879554 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.879581 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.879609 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.879636 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.891395 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.912028 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df2347dad14807e45c890a0e3c4a1f340422b10aa0c0c9504a58448f3bc19f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.932655 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36db86c-3626-446f-8410-7e1f42ed16e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabb699929e8d4141438f02a3beef44f4309fc3a75648ff7993598131ca3b7d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtrjz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6dl99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.952340 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2clkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxkfk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2clkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc 
kubenswrapper[4799]: I0216 12:32:56.973291 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca89c3d2-9726-44d8-afdf-7c7af64c0a3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b45c662fde4f865136f2880c6f279d2fb1fc685d13d35c568ab8afdba4ec034c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://406e53e46a5855031a3b7205f4e32f06e450268c5baa69017a4a905e54885c03\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864f239fc89f48a0101d21093cc5cc4430750f713b2f585c8e68dd98454d1bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f9558966ba4d52a5be6f320d61a0cbb490a93259e8147003aac6f5579bf5479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.983425 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.983509 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.983540 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.983576 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.983600 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:56Z","lastTransitionTime":"2026-02-16T12:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:56 crc kubenswrapper[4799]: I0216 12:32:56.996806 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ad67f9d8df4a3ce6b640245cd21238454f6216b55e097d49b6aaaa4a1b9a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:56Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.021335 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5bd43a676b349045483bb2dcedbf96dc706cae1d639c0a7e8a033388c123a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3820fe4fdd2cf8c2889e284b5e2901c89f392840df15b53ab4c77ee7c92b7284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.043703 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7j77r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff442c08-09db-4354-b9be-b43956019ba7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:32:44Z\\\",\\\"message\\\":\\\"2026-02-16T12:31:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8\\\\n2026-02-16T12:31:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0bc8be3c-ec70-4e72-b357-cce7f9cd85f8 to /host/opt/cni/bin/\\\\n2026-02-16T12:31:59Z [verbose] multus-daemon started\\\\n2026-02-16T12:31:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T12:32:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4w6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7j77r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.063442 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l8kgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa8c3669-05bd-45dd-8769-b8dac50ff193\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7827734c89f61e62
70057e2cbb0a6df7b4d572621936b9d26a82236edf27d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wb6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:32:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l8kgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.080862 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eb8b34e-4a74-4dfa-a673-7ac3defabf04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80217237c504698bf142a9eb0ffd021fb6fef992af71b475092d23cc32676cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4beaad91830478cfe6a8ab039cac96cd73e245ec859ba55d3acd69ce487edf92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:31:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:31:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:31:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.087106 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.087211 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.087236 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.087269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.087294 4799 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.105085 4799 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:31:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:32:57Z is after 2025-08-24T17:21:41Z" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.127714 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:52:38.673792635 +0000 UTC Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.149252 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.149273 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.149360 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:57 crc kubenswrapper[4799]: E0216 12:32:57.149546 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.149838 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:57 crc kubenswrapper[4799]: E0216 12:32:57.149950 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:57 crc kubenswrapper[4799]: E0216 12:32:57.150226 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:57 crc kubenswrapper[4799]: E0216 12:32:57.150614 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.190208 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.190290 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.190311 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.190335 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.190354 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.294390 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.294475 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.294495 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.294526 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.294547 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.398307 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.398442 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.398659 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.398742 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.398766 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.501966 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.502020 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.502039 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.502064 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.502085 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.605761 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.605823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.605848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.605884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.605912 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.708533 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.708583 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.708601 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.708627 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.708646 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.812098 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.812179 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.812199 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.812225 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.812241 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.914811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.914899 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.914924 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.914960 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:57 crc kubenswrapper[4799]: I0216 12:32:57.914988 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:57Z","lastTransitionTime":"2026-02-16T12:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.018469 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.018509 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.018518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.018533 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.018543 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.121373 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.121459 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.121485 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.121518 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.121543 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.128901 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:16:38.07731965 +0000 UTC Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.224934 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.225469 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.225615 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.225772 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.225900 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.329917 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.329990 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.330004 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.330025 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.330046 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.432642 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.432719 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.432738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.432761 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.432785 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.535575 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.535649 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.535669 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.535706 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.535732 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.639720 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.639789 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.639811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.639835 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.639856 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.749886 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.749957 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.749976 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.750004 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.750023 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.826639 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.826865 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:34:02.826823688 +0000 UTC m=+148.419839052 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.853636 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.853688 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.853702 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.853724 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.853740 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.929245 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.929331 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.929412 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.929457 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929630 4799 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929665 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929686 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929716 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929727 4799 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929741 4799 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929752 4799 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929730 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:02.929703276 +0000 UTC m=+148.522718610 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929836 4799 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929866 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:02.929831819 +0000 UTC m=+148.522847303 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929902 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:34:02.929886231 +0000 UTC m=+148.522901855 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:32:58 crc kubenswrapper[4799]: E0216 12:32:58.929944 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:02.929926432 +0000 UTC m=+148.522942036 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.957705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.957770 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.957790 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.957831 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:58 crc kubenswrapper[4799]: I0216 12:32:58.957852 4799 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:58Z","lastTransitionTime":"2026-02-16T12:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.061657 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.061728 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.061746 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.061774 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.061793 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.130011 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:36:10.429463444 +0000 UTC Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.148730 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.148891 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.148937 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:32:59 crc kubenswrapper[4799]: E0216 12:32:59.149060 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.149172 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:32:59 crc kubenswrapper[4799]: E0216 12:32:59.149365 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:32:59 crc kubenswrapper[4799]: E0216 12:32:59.149825 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:32:59 crc kubenswrapper[4799]: E0216 12:32:59.150091 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.164922 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.164997 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.165023 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.165054 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.165079 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.268968 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.269036 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.269050 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.269075 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.269088 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.372988 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.373052 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.373065 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.373097 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.373112 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.477043 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.477092 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.477103 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.477120 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.477149 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.581083 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.581331 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.581361 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.581397 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.581425 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.684269 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.684346 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.684366 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.684394 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.684421 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.786903 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.786934 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.786942 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.786956 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.786964 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.889501 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.889557 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.889572 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.889588 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.889598 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.992884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.992988 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.993009 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.993067 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:32:59 crc kubenswrapper[4799]: I0216 12:32:59.993089 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:32:59Z","lastTransitionTime":"2026-02-16T12:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.096926 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.096979 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.096991 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.097010 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.097026 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.130430 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:05:39.598739122 +0000 UTC Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.200015 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.200077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.200091 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.200109 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.200150 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.303405 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.303516 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.303569 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.303604 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.303659 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.407222 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.407300 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.407321 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.407347 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.407367 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.511313 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.511430 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.511450 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.511506 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.511526 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.615407 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.615525 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.615578 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.615611 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.615664 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.718591 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.718655 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.718675 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.718701 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.718722 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.824232 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.824357 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.824422 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.824454 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.824474 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.927912 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.928396 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.928418 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.928445 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:00 crc kubenswrapper[4799]: I0216 12:33:00.928463 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:00Z","lastTransitionTime":"2026-02-16T12:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.033161 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.033240 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.033260 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.033291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.033311 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.130893 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:34:09.560407019 +0000 UTC Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.139235 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.139291 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.139312 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.139342 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.139405 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.148522 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.148648 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:01 crc kubenswrapper[4799]: E0216 12:33:01.148726 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:01 crc kubenswrapper[4799]: E0216 12:33:01.148880 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.148987 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:01 crc kubenswrapper[4799]: E0216 12:33:01.149184 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.149442 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:01 crc kubenswrapper[4799]: E0216 12:33:01.149555 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.243848 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.244089 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.244113 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.244198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.244228 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.348707 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.348769 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.348786 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.348816 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.348835 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.452243 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.452329 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.452356 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.452393 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.452418 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.557434 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.557521 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.557542 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.557570 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.557591 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.661741 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.661823 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.661841 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.661872 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.661892 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.765837 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.765897 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.765916 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.765943 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.765964 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.869736 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.869834 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.869863 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.869897 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.869921 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.974361 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.974456 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.974492 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.974526 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:01 crc kubenswrapper[4799]: I0216 12:33:01.974551 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:01Z","lastTransitionTime":"2026-02-16T12:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.077948 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.078033 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.078055 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.078088 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.078109 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.131528 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:44:21.747274077 +0000 UTC Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.173951 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.181096 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.181176 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.181197 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.181221 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.181242 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.284141 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.284198 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.284215 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.284237 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.284252 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.387936 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.388040 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.388061 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.388100 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.388152 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.491573 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.491649 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.491670 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.491705 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.491732 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.595485 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.595586 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.595612 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.595653 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.595679 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.699565 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.699647 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.699815 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.699957 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.699979 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.803775 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.803868 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.803891 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.803920 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.803941 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.907199 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.907253 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.907265 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.907284 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:02 crc kubenswrapper[4799]: I0216 12:33:02.907301 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:02Z","lastTransitionTime":"2026-02-16T12:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.010669 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.010735 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.010752 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.010783 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.010804 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:03Z","lastTransitionTime":"2026-02-16T12:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.113940 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.114028 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.114048 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.114077 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.114104 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:03Z","lastTransitionTime":"2026-02-16T12:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.132313 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:51:30.60120484 +0000 UTC Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.148881 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.148920 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.148949 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.148909 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:03 crc kubenswrapper[4799]: E0216 12:33:03.149163 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:03 crc kubenswrapper[4799]: E0216 12:33:03.149512 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:03 crc kubenswrapper[4799]: E0216 12:33:03.149724 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:03 crc kubenswrapper[4799]: E0216 12:33:03.149999 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.217062 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.217191 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.217215 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.217253 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.217274 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:03Z","lastTransitionTime":"2026-02-16T12:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.320738 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.320811 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.320828 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.320855 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.320875 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:03Z","lastTransitionTime":"2026-02-16T12:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.424508 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.424578 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.424597 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.424626 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.424645 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:03Z","lastTransitionTime":"2026-02-16T12:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.464830 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.464884 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.464896 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.464914 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.464969 4799 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:33:03Z","lastTransitionTime":"2026-02-16T12:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.545740 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr"] Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.546438 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.549199 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.549502 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.549515 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.551756 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.586380 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.586350953 podStartE2EDuration="36.586350953s" podCreationTimestamp="2026-02-16 12:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.581178456 +0000 UTC m=+89.174193850" watchObservedRunningTime="2026-02-16 12:33:03.586350953 +0000 UTC m=+89.179366317" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.675886 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7j77r" podStartSLOduration=67.675848111 podStartE2EDuration="1m7.675848111s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.660245649 +0000 UTC m=+89.253261023" watchObservedRunningTime="2026-02-16 12:33:03.675848111 +0000 UTC m=+89.268863475" Feb 16 
12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.676557 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-l8kgf" podStartSLOduration=67.676549111 podStartE2EDuration="1m7.676549111s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.675388578 +0000 UTC m=+89.268403952" watchObservedRunningTime="2026-02-16 12:33:03.676549111 +0000 UTC m=+89.269564485" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.693455 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9428597d-a59a-4980-a195-6d1b2ef6c971-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.693520 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9428597d-a59a-4980-a195-6d1b2ef6c971-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.693571 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9428597d-a59a-4980-a195-6d1b2ef6c971-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.693607 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9428597d-a59a-4980-a195-6d1b2ef6c971-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.693644 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9428597d-a59a-4980-a195-6d1b2ef6c971-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.718039 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.717997856 podStartE2EDuration="1.717997856s" podCreationTimestamp="2026-02-16 12:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.71742111 +0000 UTC m=+89.310436484" watchObservedRunningTime="2026-02-16 12:33:03.717997856 +0000 UTC m=+89.311013220" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.761914 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=11.761880681 podStartE2EDuration="11.761880681s" podCreationTimestamp="2026-02-16 12:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.761018846 +0000 UTC m=+89.354034220" watchObservedRunningTime="2026-02-16 12:33:03.761880681 +0000 UTC m=+89.354896045" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 
12:33:03.795209 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9428597d-a59a-4980-a195-6d1b2ef6c971-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.795327 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9428597d-a59a-4980-a195-6d1b2ef6c971-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.795386 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9428597d-a59a-4980-a195-6d1b2ef6c971-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.795448 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9428597d-a59a-4980-a195-6d1b2ef6c971-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.795561 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9428597d-a59a-4980-a195-6d1b2ef6c971-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: 
\"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.797464 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9428597d-a59a-4980-a195-6d1b2ef6c971-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.795574 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9428597d-a59a-4980-a195-6d1b2ef6c971-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.797785 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9428597d-a59a-4980-a195-6d1b2ef6c971-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.806428 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zl9jj" podStartSLOduration=67.806391203 podStartE2EDuration="1m7.806391203s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.804383976 +0000 UTC m=+89.397399340" watchObservedRunningTime="2026-02-16 12:33:03.806391203 +0000 UTC m=+89.399406577" Feb 16 12:33:03 crc 
kubenswrapper[4799]: I0216 12:33:03.809350 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9428597d-a59a-4980-a195-6d1b2ef6c971-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.829039 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9428597d-a59a-4980-a195-6d1b2ef6c971-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zzhpr\" (UID: \"9428597d-a59a-4980-a195-6d1b2ef6c971\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.838495 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4p4qf" podStartSLOduration=67.838466703 podStartE2EDuration="1m7.838466703s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.837913837 +0000 UTC m=+89.430929211" watchObservedRunningTime="2026-02-16 12:33:03.838466703 +0000 UTC m=+89.431482077" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.872320 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.934271 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ddt84" podStartSLOduration=67.934235448 podStartE2EDuration="1m7.934235448s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.9145459 +0000 UTC m=+89.507561244" watchObservedRunningTime="2026-02-16 12:33:03.934235448 +0000 UTC m=+89.527250792" Feb 16 12:33:03 crc kubenswrapper[4799]: I0216 12:33:03.963845 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.963816807 podStartE2EDuration="1m8.963816807s" podCreationTimestamp="2026-02-16 12:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:03.936741939 +0000 UTC m=+89.529757273" watchObservedRunningTime="2026-02-16 12:33:03.963816807 +0000 UTC m=+89.556832151" Feb 16 12:33:04 crc kubenswrapper[4799]: I0216 12:33:04.025709 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podStartSLOduration=68.025684361 podStartE2EDuration="1m8.025684361s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:04.010399878 +0000 UTC m=+89.603415212" watchObservedRunningTime="2026-02-16 12:33:04.025684361 +0000 UTC m=+89.618699695" Feb 16 12:33:04 crc kubenswrapper[4799]: I0216 12:33:04.133440 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:15:28.661173467 +0000 UTC Feb 16 12:33:04 crc kubenswrapper[4799]: I0216 12:33:04.133535 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 12:33:04 crc kubenswrapper[4799]: I0216 12:33:04.143965 4799 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 12:33:04 crc kubenswrapper[4799]: I0216 12:33:04.745996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" event={"ID":"9428597d-a59a-4980-a195-6d1b2ef6c971","Type":"ContainerStarted","Data":"ec2881c70eb138d33dc403c892b30c20a2e4418c4401014eae641601b8af606d"} Feb 16 12:33:04 crc kubenswrapper[4799]: I0216 12:33:04.746102 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" event={"ID":"9428597d-a59a-4980-a195-6d1b2ef6c971","Type":"ContainerStarted","Data":"1ab221afef3842aecc5c1c9136155b2620d80dac03e2e877d96cf1742ef57d24"} Feb 16 12:33:04 crc kubenswrapper[4799]: I0216 12:33:04.770647 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zzhpr" podStartSLOduration=68.770613696 podStartE2EDuration="1m8.770613696s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:04.769284548 +0000 UTC m=+90.362299952" watchObservedRunningTime="2026-02-16 12:33:04.770613696 +0000 UTC m=+90.363629070" Feb 16 12:33:04 crc kubenswrapper[4799]: I0216 12:33:04.770825 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.770813681 
podStartE2EDuration="1m9.770813681s" podCreationTimestamp="2026-02-16 12:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:04.052802901 +0000 UTC m=+89.645818245" watchObservedRunningTime="2026-02-16 12:33:04.770813681 +0000 UTC m=+90.363829045"
Feb 16 12:33:05 crc kubenswrapper[4799]: I0216 12:33:05.148819 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:05 crc kubenswrapper[4799]: I0216 12:33:05.149043 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:05 crc kubenswrapper[4799]: E0216 12:33:05.151283 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:05 crc kubenswrapper[4799]: I0216 12:33:05.151373 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:05 crc kubenswrapper[4799]: E0216 12:33:05.151542 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:05 crc kubenswrapper[4799]: E0216 12:33:05.151836 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:05 crc kubenswrapper[4799]: I0216 12:33:05.152111 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:05 crc kubenswrapper[4799]: E0216 12:33:05.152342 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:07 crc kubenswrapper[4799]: I0216 12:33:07.149278 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:07 crc kubenswrapper[4799]: I0216 12:33:07.149278 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:07 crc kubenswrapper[4799]: I0216 12:33:07.149584 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:07 crc kubenswrapper[4799]: E0216 12:33:07.149726 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:07 crc kubenswrapper[4799]: I0216 12:33:07.149755 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:07 crc kubenswrapper[4799]: I0216 12:33:07.150057 4799 scope.go:117] "RemoveContainer" containerID="0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10"
Feb 16 12:33:07 crc kubenswrapper[4799]: E0216 12:33:07.150068 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:07 crc kubenswrapper[4799]: E0216 12:33:07.150063 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:07 crc kubenswrapper[4799]: E0216 12:33:07.150219 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14"
Feb 16 12:33:07 crc kubenswrapper[4799]: E0216 12:33:07.150239 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:09 crc kubenswrapper[4799]: I0216 12:33:09.148975 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:09 crc kubenswrapper[4799]: I0216 12:33:09.149143 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:09 crc kubenswrapper[4799]: I0216 12:33:09.149170 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:09 crc kubenswrapper[4799]: I0216 12:33:09.149195 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:09 crc kubenswrapper[4799]: E0216 12:33:09.151114 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:09 crc kubenswrapper[4799]: E0216 12:33:09.151329 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:09 crc kubenswrapper[4799]: E0216 12:33:09.151284 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:09 crc kubenswrapper[4799]: E0216 12:33:09.151217 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:11 crc kubenswrapper[4799]: I0216 12:33:11.148246 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:11 crc kubenswrapper[4799]: I0216 12:33:11.148350 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:11 crc kubenswrapper[4799]: E0216 12:33:11.148397 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:11 crc kubenswrapper[4799]: E0216 12:33:11.148509 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:11 crc kubenswrapper[4799]: I0216 12:33:11.148732 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:11 crc kubenswrapper[4799]: E0216 12:33:11.149012 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:11 crc kubenswrapper[4799]: I0216 12:33:11.149953 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:11 crc kubenswrapper[4799]: E0216 12:33:11.150374 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:13 crc kubenswrapper[4799]: I0216 12:33:13.149356 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:13 crc kubenswrapper[4799]: I0216 12:33:13.149413 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:13 crc kubenswrapper[4799]: I0216 12:33:13.149294 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:13 crc kubenswrapper[4799]: I0216 12:33:13.149357 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:13 crc kubenswrapper[4799]: E0216 12:33:13.149584 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:13 crc kubenswrapper[4799]: E0216 12:33:13.149762 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:13 crc kubenswrapper[4799]: E0216 12:33:13.149922 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:13 crc kubenswrapper[4799]: E0216 12:33:13.150007 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:15 crc kubenswrapper[4799]: I0216 12:33:15.150518 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:15 crc kubenswrapper[4799]: I0216 12:33:15.150553 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:15 crc kubenswrapper[4799]: E0216 12:33:15.150702 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:15 crc kubenswrapper[4799]: I0216 12:33:15.150519 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:15 crc kubenswrapper[4799]: E0216 12:33:15.151003 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:15 crc kubenswrapper[4799]: I0216 12:33:15.151144 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:15 crc kubenswrapper[4799]: E0216 12:33:15.151196 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:15 crc kubenswrapper[4799]: E0216 12:33:15.151274 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:15 crc kubenswrapper[4799]: I0216 12:33:15.537868 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:15 crc kubenswrapper[4799]: E0216 12:33:15.538164 4799 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 12:33:15 crc kubenswrapper[4799]: E0216 12:33:15.538303 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs podName:e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd nodeName:}" failed. No retries permitted until 2026-02-16 12:34:19.538275385 +0000 UTC m=+165.131290949 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs") pod "network-metrics-daemon-2clkm" (UID: "e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 12:33:17 crc kubenswrapper[4799]: I0216 12:33:17.148404 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:17 crc kubenswrapper[4799]: I0216 12:33:17.148526 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:17 crc kubenswrapper[4799]: I0216 12:33:17.148600 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:17 crc kubenswrapper[4799]: E0216 12:33:17.148786 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:17 crc kubenswrapper[4799]: I0216 12:33:17.148821 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:17 crc kubenswrapper[4799]: E0216 12:33:17.149010 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:17 crc kubenswrapper[4799]: E0216 12:33:17.149195 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:17 crc kubenswrapper[4799]: E0216 12:33:17.149359 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:19 crc kubenswrapper[4799]: I0216 12:33:19.148712 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:19 crc kubenswrapper[4799]: I0216 12:33:19.148785 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:19 crc kubenswrapper[4799]: I0216 12:33:19.148739 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:19 crc kubenswrapper[4799]: E0216 12:33:19.148939 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:19 crc kubenswrapper[4799]: E0216 12:33:19.149108 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:19 crc kubenswrapper[4799]: E0216 12:33:19.149266 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:19 crc kubenswrapper[4799]: I0216 12:33:19.149462 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:19 crc kubenswrapper[4799]: E0216 12:33:19.149624 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:21 crc kubenswrapper[4799]: I0216 12:33:21.149226 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:21 crc kubenswrapper[4799]: I0216 12:33:21.149256 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:21 crc kubenswrapper[4799]: I0216 12:33:21.149329 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:21 crc kubenswrapper[4799]: I0216 12:33:21.149344 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:21 crc kubenswrapper[4799]: E0216 12:33:21.150672 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:21 crc kubenswrapper[4799]: E0216 12:33:21.151407 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:21 crc kubenswrapper[4799]: E0216 12:33:21.151957 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:21 crc kubenswrapper[4799]: E0216 12:33:21.152157 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:22 crc kubenswrapper[4799]: I0216 12:33:22.149216 4799 scope.go:117] "RemoveContainer" containerID="0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10"
Feb 16 12:33:22 crc kubenswrapper[4799]: E0216 12:33:22.149464 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14"
Feb 16 12:33:23 crc kubenswrapper[4799]: I0216 12:33:23.148961 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:23 crc kubenswrapper[4799]: E0216 12:33:23.149301 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:23 crc kubenswrapper[4799]: I0216 12:33:23.149832 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:23 crc kubenswrapper[4799]: E0216 12:33:23.149974 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:23 crc kubenswrapper[4799]: I0216 12:33:23.150520 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:23 crc kubenswrapper[4799]: E0216 12:33:23.150686 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:23 crc kubenswrapper[4799]: I0216 12:33:23.150782 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:23 crc kubenswrapper[4799]: E0216 12:33:23.150981 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:25 crc kubenswrapper[4799]: I0216 12:33:25.148911 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:25 crc kubenswrapper[4799]: I0216 12:33:25.148955 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:25 crc kubenswrapper[4799]: I0216 12:33:25.148926 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:25 crc kubenswrapper[4799]: I0216 12:33:25.149025 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:25 crc kubenswrapper[4799]: E0216 12:33:25.150275 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:25 crc kubenswrapper[4799]: E0216 12:33:25.150403 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:25 crc kubenswrapper[4799]: E0216 12:33:25.150491 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:25 crc kubenswrapper[4799]: E0216 12:33:25.150575 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:27 crc kubenswrapper[4799]: I0216 12:33:27.149234 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:27 crc kubenswrapper[4799]: I0216 12:33:27.149294 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:27 crc kubenswrapper[4799]: I0216 12:33:27.149265 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:27 crc kubenswrapper[4799]: I0216 12:33:27.149395 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:27 crc kubenswrapper[4799]: E0216 12:33:27.149617 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:27 crc kubenswrapper[4799]: E0216 12:33:27.149927 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:27 crc kubenswrapper[4799]: E0216 12:33:27.150047 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:27 crc kubenswrapper[4799]: E0216 12:33:27.150220 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:29 crc kubenswrapper[4799]: I0216 12:33:29.148309 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:33:29 crc kubenswrapper[4799]: I0216 12:33:29.148473 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm"
Feb 16 12:33:29 crc kubenswrapper[4799]: I0216 12:33:29.148316 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:29 crc kubenswrapper[4799]: E0216 12:33:29.148553 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:33:29 crc kubenswrapper[4799]: E0216 12:33:29.148751 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd"
Feb 16 12:33:29 crc kubenswrapper[4799]: I0216 12:33:29.148862 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:29 crc kubenswrapper[4799]: E0216 12:33:29.148893 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:29 crc kubenswrapper[4799]: E0216 12:33:29.149069 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:33:30 crc kubenswrapper[4799]: I0216 12:33:30.856616 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/1.log"
Feb 16 12:33:30 crc kubenswrapper[4799]: I0216 12:33:30.857452 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/0.log"
Feb 16 12:33:30 crc kubenswrapper[4799]: I0216 12:33:30.857528 4799 generic.go:334] "Generic (PLEG): container finished" podID="ff442c08-09db-4354-b9be-b43956019ba7" containerID="c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98" exitCode=1
Feb 16 12:33:30 crc kubenswrapper[4799]: I0216 12:33:30.857582 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7j77r" event={"ID":"ff442c08-09db-4354-b9be-b43956019ba7","Type":"ContainerDied","Data":"c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98"}
Feb 16 12:33:30 crc kubenswrapper[4799]: I0216 12:33:30.857645 4799 scope.go:117] "RemoveContainer" containerID="be43aef6e90e5ea64a5892a882614ee2deac26bd6f2978bfb92282603c5a364d"
Feb 16 12:33:30 crc kubenswrapper[4799]: I0216 12:33:30.858637 4799 scope.go:117] "RemoveContainer" containerID="c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98"
Feb 16 12:33:30 crc kubenswrapper[4799]: E0216 12:33:30.858958 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7j77r_openshift-multus(ff442c08-09db-4354-b9be-b43956019ba7)\"" pod="openshift-multus/multus-7j77r" podUID="ff442c08-09db-4354-b9be-b43956019ba7"
Feb 16 12:33:31 crc kubenswrapper[4799]: I0216 12:33:31.149522 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:33:31 crc kubenswrapper[4799]: I0216 12:33:31.149650 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:33:31 crc kubenswrapper[4799]: E0216 12:33:31.150485 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:33:31 crc kubenswrapper[4799]: I0216 12:33:31.149687 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:31 crc kubenswrapper[4799]: I0216 12:33:31.149687 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:31 crc kubenswrapper[4799]: E0216 12:33:31.150806 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:31 crc kubenswrapper[4799]: E0216 12:33:31.150933 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:31 crc kubenswrapper[4799]: E0216 12:33:31.151118 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:31 crc kubenswrapper[4799]: I0216 12:33:31.865368 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/1.log" Feb 16 12:33:33 crc kubenswrapper[4799]: I0216 12:33:33.149092 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:33 crc kubenswrapper[4799]: I0216 12:33:33.149240 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:33 crc kubenswrapper[4799]: I0216 12:33:33.149328 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:33 crc kubenswrapper[4799]: I0216 12:33:33.149504 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:33 crc kubenswrapper[4799]: E0216 12:33:33.149693 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:33 crc kubenswrapper[4799]: E0216 12:33:33.149913 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:33 crc kubenswrapper[4799]: E0216 12:33:33.150086 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:33 crc kubenswrapper[4799]: E0216 12:33:33.150204 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:34 crc kubenswrapper[4799]: I0216 12:33:34.149497 4799 scope.go:117] "RemoveContainer" containerID="0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10" Feb 16 12:33:34 crc kubenswrapper[4799]: E0216 12:33:34.149740 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzcq6_openshift-ovn-kubernetes(8ae13b0a-1f69-476d-a552-4467fcedac14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" Feb 16 12:33:35 crc kubenswrapper[4799]: E0216 12:33:35.099943 4799 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 12:33:35 crc kubenswrapper[4799]: I0216 12:33:35.149317 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:35 crc kubenswrapper[4799]: I0216 12:33:35.149366 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:35 crc kubenswrapper[4799]: E0216 12:33:35.150466 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:35 crc kubenswrapper[4799]: I0216 12:33:35.150489 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:35 crc kubenswrapper[4799]: E0216 12:33:35.151446 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:35 crc kubenswrapper[4799]: E0216 12:33:35.150756 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:35 crc kubenswrapper[4799]: I0216 12:33:35.150519 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:35 crc kubenswrapper[4799]: E0216 12:33:35.152274 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:35 crc kubenswrapper[4799]: E0216 12:33:35.283755 4799 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 12:33:37 crc kubenswrapper[4799]: I0216 12:33:37.149044 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:37 crc kubenswrapper[4799]: I0216 12:33:37.149224 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:37 crc kubenswrapper[4799]: I0216 12:33:37.149049 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:37 crc kubenswrapper[4799]: E0216 12:33:37.149372 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:37 crc kubenswrapper[4799]: I0216 12:33:37.149549 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:37 crc kubenswrapper[4799]: E0216 12:33:37.149555 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:37 crc kubenswrapper[4799]: E0216 12:33:37.149743 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:37 crc kubenswrapper[4799]: E0216 12:33:37.149801 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:39 crc kubenswrapper[4799]: I0216 12:33:39.149453 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:39 crc kubenswrapper[4799]: I0216 12:33:39.149525 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:39 crc kubenswrapper[4799]: E0216 12:33:39.149674 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:39 crc kubenswrapper[4799]: I0216 12:33:39.149763 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:39 crc kubenswrapper[4799]: I0216 12:33:39.149810 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:39 crc kubenswrapper[4799]: E0216 12:33:39.149946 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:39 crc kubenswrapper[4799]: E0216 12:33:39.150075 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:39 crc kubenswrapper[4799]: E0216 12:33:39.150255 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:40 crc kubenswrapper[4799]: E0216 12:33:40.284982 4799 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 12:33:41 crc kubenswrapper[4799]: I0216 12:33:41.148675 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:41 crc kubenswrapper[4799]: I0216 12:33:41.148704 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:41 crc kubenswrapper[4799]: E0216 12:33:41.148814 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:41 crc kubenswrapper[4799]: I0216 12:33:41.148892 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:41 crc kubenswrapper[4799]: I0216 12:33:41.149029 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:41 crc kubenswrapper[4799]: E0216 12:33:41.149082 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:41 crc kubenswrapper[4799]: E0216 12:33:41.149312 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:41 crc kubenswrapper[4799]: E0216 12:33:41.149668 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:43 crc kubenswrapper[4799]: I0216 12:33:43.148606 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:43 crc kubenswrapper[4799]: I0216 12:33:43.148732 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:43 crc kubenswrapper[4799]: I0216 12:33:43.148770 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:43 crc kubenswrapper[4799]: E0216 12:33:43.148782 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:43 crc kubenswrapper[4799]: E0216 12:33:43.149570 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:43 crc kubenswrapper[4799]: E0216 12:33:43.150165 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:43 crc kubenswrapper[4799]: I0216 12:33:43.150187 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:43 crc kubenswrapper[4799]: E0216 12:33:43.150455 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:45 crc kubenswrapper[4799]: I0216 12:33:45.149484 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:45 crc kubenswrapper[4799]: I0216 12:33:45.149497 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:45 crc kubenswrapper[4799]: I0216 12:33:45.149521 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:45 crc kubenswrapper[4799]: E0216 12:33:45.150827 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:45 crc kubenswrapper[4799]: I0216 12:33:45.150845 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:45 crc kubenswrapper[4799]: E0216 12:33:45.151154 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:45 crc kubenswrapper[4799]: E0216 12:33:45.150943 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:45 crc kubenswrapper[4799]: E0216 12:33:45.151010 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:45 crc kubenswrapper[4799]: E0216 12:33:45.286417 4799 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 12:33:46 crc kubenswrapper[4799]: I0216 12:33:46.149455 4799 scope.go:117] "RemoveContainer" containerID="c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98" Feb 16 12:33:46 crc kubenswrapper[4799]: I0216 12:33:46.936601 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/1.log" Feb 16 12:33:46 crc kubenswrapper[4799]: I0216 12:33:46.937174 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7j77r" event={"ID":"ff442c08-09db-4354-b9be-b43956019ba7","Type":"ContainerStarted","Data":"159c40eee1999c836def11b49d0de21c643e5b9140ecb4fc62683775c8af77a9"} Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.148755 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.148813 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.148836 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.148949 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:47 crc kubenswrapper[4799]: E0216 12:33:47.149672 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:47 crc kubenswrapper[4799]: E0216 12:33:47.149883 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:47 crc kubenswrapper[4799]: E0216 12:33:47.150207 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:47 crc kubenswrapper[4799]: E0216 12:33:47.150254 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.150660 4799 scope.go:117] "RemoveContainer" containerID="0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10" Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.942902 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/3.log" Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.946447 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerStarted","Data":"b18518e791edc1176d193a389ef0578150e5064b7dbc957b4b036bceffdd11c2"} Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.947158 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:33:47 crc kubenswrapper[4799]: I0216 12:33:47.981067 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podStartSLOduration=111.981043341 podStartE2EDuration="1m51.981043341s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:33:47.980188198 +0000 UTC m=+133.573203572" 
watchObservedRunningTime="2026-02-16 12:33:47.981043341 +0000 UTC m=+133.574058675" Feb 16 12:33:48 crc kubenswrapper[4799]: I0216 12:33:48.162866 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2clkm"] Feb 16 12:33:48 crc kubenswrapper[4799]: I0216 12:33:48.163025 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:48 crc kubenswrapper[4799]: E0216 12:33:48.163203 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:49 crc kubenswrapper[4799]: I0216 12:33:49.149228 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:49 crc kubenswrapper[4799]: I0216 12:33:49.149300 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:49 crc kubenswrapper[4799]: E0216 12:33:49.149968 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:49 crc kubenswrapper[4799]: I0216 12:33:49.149316 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:49 crc kubenswrapper[4799]: E0216 12:33:49.150258 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:49 crc kubenswrapper[4799]: E0216 12:33:49.150378 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:50 crc kubenswrapper[4799]: I0216 12:33:50.148699 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:50 crc kubenswrapper[4799]: E0216 12:33:50.148977 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:50 crc kubenswrapper[4799]: E0216 12:33:50.288754 4799 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 12:33:51 crc kubenswrapper[4799]: I0216 12:33:51.148709 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:51 crc kubenswrapper[4799]: I0216 12:33:51.148774 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:51 crc kubenswrapper[4799]: I0216 12:33:51.148709 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:51 crc kubenswrapper[4799]: E0216 12:33:51.148864 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:51 crc kubenswrapper[4799]: E0216 12:33:51.148967 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:51 crc kubenswrapper[4799]: E0216 12:33:51.149073 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:52 crc kubenswrapper[4799]: I0216 12:33:52.149018 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:52 crc kubenswrapper[4799]: E0216 12:33:52.149302 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:53 crc kubenswrapper[4799]: I0216 12:33:53.148694 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:53 crc kubenswrapper[4799]: I0216 12:33:53.148762 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:53 crc kubenswrapper[4799]: E0216 12:33:53.148957 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:53 crc kubenswrapper[4799]: I0216 12:33:53.149031 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:53 crc kubenswrapper[4799]: E0216 12:33:53.149288 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:53 crc kubenswrapper[4799]: E0216 12:33:53.149503 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:54 crc kubenswrapper[4799]: I0216 12:33:54.148614 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:54 crc kubenswrapper[4799]: E0216 12:33:54.148880 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2clkm" podUID="e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd" Feb 16 12:33:55 crc kubenswrapper[4799]: I0216 12:33:55.148408 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:55 crc kubenswrapper[4799]: I0216 12:33:55.148448 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:55 crc kubenswrapper[4799]: I0216 12:33:55.148513 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:55 crc kubenswrapper[4799]: E0216 12:33:55.150315 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:33:55 crc kubenswrapper[4799]: E0216 12:33:55.150429 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:33:55 crc kubenswrapper[4799]: E0216 12:33:55.150556 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:33:56 crc kubenswrapper[4799]: I0216 12:33:56.148531 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:33:56 crc kubenswrapper[4799]: I0216 12:33:56.152334 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 12:33:56 crc kubenswrapper[4799]: I0216 12:33:56.153077 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 12:33:57 crc kubenswrapper[4799]: I0216 12:33:57.148496 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:33:57 crc kubenswrapper[4799]: I0216 12:33:57.148552 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:33:57 crc kubenswrapper[4799]: I0216 12:33:57.148701 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:33:57 crc kubenswrapper[4799]: I0216 12:33:57.152340 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 12:33:57 crc kubenswrapper[4799]: I0216 12:33:57.152375 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 12:33:57 crc kubenswrapper[4799]: I0216 12:33:57.152485 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 12:33:57 crc kubenswrapper[4799]: I0216 12:33:57.152537 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 12:34:02 crc kubenswrapper[4799]: I0216 12:34:02.913084 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:02 crc kubenswrapper[4799]: E0216 12:34:02.913319 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:36:04.913285997 +0000 UTC m=+270.506301341 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.014704 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.014776 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.014829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.014904 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.017869 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.024335 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.038357 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.040019 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 
12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.173567 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.235385 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:34:03 crc kubenswrapper[4799]: I0216 12:34:03.243738 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:34:03 crc kubenswrapper[4799]: W0216 12:34:03.499604 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-8aa6f1c36b8dac93af970a560243823dcab6f6d771d4af977e6967fb46bee4f1 WatchSource:0}: Error finding container 8aa6f1c36b8dac93af970a560243823dcab6f6d771d4af977e6967fb46bee4f1: Status 404 returned error can't find the container with id 8aa6f1c36b8dac93af970a560243823dcab6f6d771d4af977e6967fb46bee4f1 Feb 16 12:34:03 crc kubenswrapper[4799]: W0216 12:34:03.516083 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-f03efe31d1da3d30ccfdb162b7302dbf8c768644bea55e8f64dbd532c7da6138 WatchSource:0}: Error finding container f03efe31d1da3d30ccfdb162b7302dbf8c768644bea55e8f64dbd532c7da6138: Status 404 returned error can't find the container with id f03efe31d1da3d30ccfdb162b7302dbf8c768644bea55e8f64dbd532c7da6138 Feb 16 12:34:03 crc kubenswrapper[4799]: W0216 12:34:03.550554 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d6dbe1a59ad0dfbbd4c432d5e84abfd2c142ba4a78ccc1f0ab98fcdfb2bc32e1 WatchSource:0}: Error finding container d6dbe1a59ad0dfbbd4c432d5e84abfd2c142ba4a78ccc1f0ab98fcdfb2bc32e1: Status 404 returned error can't find the container with id d6dbe1a59ad0dfbbd4c432d5e84abfd2c142ba4a78ccc1f0ab98fcdfb2bc32e1 Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.021572 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b7fedc00d6646406895c5d42c811ce980c67bd111f58227bf1d16532e5e7a57f"} Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.022240 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f03efe31d1da3d30ccfdb162b7302dbf8c768644bea55e8f64dbd532c7da6138"} Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.022479 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.022926 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2afb45f88b397d0eb6e4264374a3afb9f288a8b95720fd80dc10f37741528060"} Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.022957 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d6dbe1a59ad0dfbbd4c432d5e84abfd2c142ba4a78ccc1f0ab98fcdfb2bc32e1"} Feb 16 12:34:04 crc 
kubenswrapper[4799]: I0216 12:34:04.025255 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"20d1c220c892891c1e07f60bc86fc19fdec97c1ab7ea399b06996467edade8ef"} Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.025351 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8aa6f1c36b8dac93af970a560243823dcab6f6d771d4af977e6967fb46bee4f1"} Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.401589 4799 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.449137 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gm29d"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.449911 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.453188 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.453412 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.454354 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.454602 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.455049 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.455323 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.456507 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.456733 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.459555 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.461090 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.461658 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.466371 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-66brb"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.466588 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.466789 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6lds8"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.467088 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.467391 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.467634 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.475443 4799 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.475519 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.475646 4799 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.475669 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.475776 4799 reflector.go:561] 
object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.475794 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476254 4799 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476310 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476421 4799 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list 
resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476506 4799 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476530 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476452 4799 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476586 4799 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476593 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: 
configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476610 4799 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476664 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476576 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476616 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476699 4799 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476703 4799 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476716 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476768 4799 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476757 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
\"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476787 4799 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476824 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476781 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476871 4799 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" 
cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476892 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476875 4799 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.476923 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476534 4799 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 
12:34:04.476966 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.476998 4799 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477017 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477025 4799 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477047 4799 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477041 4799 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477078 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477100 4799 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477144 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477176 4799 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477191 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477201 4799 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477217 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477232 4799 reflector.go:561] 
object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477262 4799 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477263 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477277 4799 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477244 4799 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477295 4799 
reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477278 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477310 4799 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477313 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477330 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to 
watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: W0216 12:34:04.477434 4799 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 12:34:04 crc kubenswrapper[4799]: E0216 12:34:04.477462 4799 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.478345 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.479674 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.480973 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.481653 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.483024 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.483548 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.484455 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.484696 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.484876 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.484944 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.490438 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d2xlw"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.491458 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.491857 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.491880 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.492465 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.492616 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4fmnw"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.493205 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.495205 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.496077 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.497811 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.498022 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.498112 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.498484 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.499056 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.499420 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.499865 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.500080 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.505725 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-twvg6"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.506595 4799 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.506679 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.507410 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.507455 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.508016 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.507535 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.507538 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.507588 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.507625 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.508396 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.507735 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.507787 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.508524 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.508683 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.508334 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.508929 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.520876 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl8tw"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.521968 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.547583 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.547735 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.547979 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.548316 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kkq5f"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.548456 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.548809 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.548935 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.550954 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.551284 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.551742 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.555220 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.556082 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.556383 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.556532 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.557270 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.557390 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-njdbl"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.558142 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-njdbl" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.558694 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.559657 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.560413 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.560537 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.560595 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.560676 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.560813 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.560847 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.561006 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.560813 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.561307 4799 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.561341 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.561312 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.563946 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-87s27"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.564570 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.564804 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.564937 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.564959 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.565458 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.566795 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.572249 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.572434 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6lds8"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.573555 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-df4xr"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.574156 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.574222 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.575556 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576319 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576511 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq9w8\" (UniqueName: \"kubernetes.io/projected/95586541-b68e-489b-8c9c-73477d70f4dd-kube-api-access-lq9w8\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576551 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-config\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576579 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576598 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576616 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95586541-b68e-489b-8c9c-73477d70f4dd-audit-dir\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576633 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-encryption-config\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576648 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b5cd50be-cad4-4fb3-8732-e870df15eb34-node-pullsecrets\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576664 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ngm\" (UniqueName: \"kubernetes.io/projected/12ef62d5-7675-44bf-a2e9-53093b004126-kube-api-access-57ngm\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576684 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" 
Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576701 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576719 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-images\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576732 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-etcd-client\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576748 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-audit\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576763 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gm29d\" (UID: 
\"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576782 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4nnb\" (UniqueName: \"kubernetes.io/projected/6e9b7ea2-185b-443f-8aca-7286501b2a80-kube-api-access-k4nnb\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576797 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-etcd-serving-ca\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576816 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576834 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf7gh\" (UniqueName: \"kubernetes.io/projected/de5f2060-f162-4fac-b3ef-2acda638dfb6-kube-api-access-zf7gh\") pod \"downloads-7954f5f757-njdbl\" (UID: \"de5f2060-f162-4fac-b3ef-2acda638dfb6\") " pod="openshift-console/downloads-7954f5f757-njdbl" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576871 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-69htr\" (UniqueName: \"kubernetes.io/projected/ea2b5f46-58b6-41f8-9985-85d5236568ef-kube-api-access-69htr\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576891 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-config\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576914 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-audit-policies\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576933 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.576952 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc 
kubenswrapper[4799]: I0216 12:34:04.576979 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/12ef62d5-7675-44bf-a2e9-53093b004126-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577006 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-serving-cert\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577030 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-client\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577060 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5cd50be-cad4-4fb3-8732-e870df15eb34-audit-dir\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577084 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: 
\"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577108 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-image-import-ca\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577154 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdd8\" (UniqueName: \"kubernetes.io/projected/b5cd50be-cad4-4fb3-8732-e870df15eb34-kube-api-access-sqdd8\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577179 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-serving-cert\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577202 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-encryption-config\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.577225 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.583466 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.585957 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.586755 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.587751 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.588297 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.588404 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.588658 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.603180 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.611080 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.611312 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.612863 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.619023 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.628881 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.629565 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.631169 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.631687 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.632270 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.632818 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.633078 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.636488 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.641624 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.642436 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.643197 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.643541 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.647194 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.648285 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.648647 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6w2wm"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.649342 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.649840 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d2xlw"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.651380 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.652371 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.652648 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-79mk5"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.653218 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.655756 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4fmnw"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.655811 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n9qrr"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.656347 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cbjpn"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.656472 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.657356 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.658445 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.658695 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wrg52"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.659193 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.660854 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.662019 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.663389 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nwzhj"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.664016 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.665181 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.665640 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.666828 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.667243 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.669301 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-77v9m"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.675438 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.677982 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/12ef62d5-7675-44bf-a2e9-53093b004126-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678020 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-serving-cert\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678062 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-client\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678089 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5cd50be-cad4-4fb3-8732-e870df15eb34-audit-dir\") pod \"apiserver-76f77b778f-gm29d\" (UID: 
\"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678113 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678179 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-image-import-ca\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678198 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqdd8\" (UniqueName: \"kubernetes.io/projected/b5cd50be-cad4-4fb3-8732-e870df15eb34-kube-api-access-sqdd8\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678215 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-serving-cert\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678232 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-encryption-config\") pod 
\"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678250 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678273 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq9w8\" (UniqueName: \"kubernetes.io/projected/95586541-b68e-489b-8c9c-73477d70f4dd-kube-api-access-lq9w8\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678301 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-config\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678325 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678347 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/95586541-b68e-489b-8c9c-73477d70f4dd-audit-dir\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678373 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678396 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-encryption-config\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678418 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b5cd50be-cad4-4fb3-8732-e870df15eb34-node-pullsecrets\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678442 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ngm\" (UniqueName: \"kubernetes.io/projected/12ef62d5-7675-44bf-a2e9-53093b004126-kube-api-access-57ngm\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678468 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678493 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678515 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-images\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678573 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-etcd-client\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678597 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-audit\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678624 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678647 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4nnb\" (UniqueName: \"kubernetes.io/projected/6e9b7ea2-185b-443f-8aca-7286501b2a80-kube-api-access-k4nnb\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-etcd-serving-ca\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678691 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678712 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf7gh\" (UniqueName: \"kubernetes.io/projected/de5f2060-f162-4fac-b3ef-2acda638dfb6-kube-api-access-zf7gh\") pod \"downloads-7954f5f757-njdbl\" (UID: \"de5f2060-f162-4fac-b3ef-2acda638dfb6\") " 
pod="openshift-console/downloads-7954f5f757-njdbl" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678751 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69htr\" (UniqueName: \"kubernetes.io/projected/ea2b5f46-58b6-41f8-9985-85d5236568ef-kube-api-access-69htr\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678774 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-config\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678796 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-audit-policies\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678819 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.678843 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert\") pod \"controller-manager-879f6c89f-66brb\" (UID: 
\"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.680239 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b5cd50be-cad4-4fb3-8732-e870df15eb34-node-pullsecrets\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.680345 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5cd50be-cad4-4fb3-8732-e870df15eb34-audit-dir\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.681152 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-image-import-ca\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.681484 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95586541-b68e-489b-8c9c-73477d70f4dd-audit-dir\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.681595 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.682961 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nkghs"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.683000 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-serving-cert\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.683608 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-etcd-serving-ca\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.684103 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-config\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.684347 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-encryption-config\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.684396 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-klhdd"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 
12:34:04.684556 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.684909 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-klhdd" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.684987 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.685752 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.690057 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5cd50be-cad4-4fb3-8732-e870df15eb34-etcd-client\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.690158 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kkq5f"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.690198 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.690502 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.691651 4799 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-df4xr"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.694212 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl8tw"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.694870 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b5cd50be-cad4-4fb3-8732-e870df15eb34-audit\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.695543 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-twvg6"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.698551 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.698595 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-66brb"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.698606 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-njdbl"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.704045 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.708200 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.713247 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8"] 
Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.713795 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.719587 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.727644 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.730693 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gm29d"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.732901 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cbjpn"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.734640 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ndp46"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.736297 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.737566 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wx2hx"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.738723 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.739933 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.742559 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-87s27"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.743775 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.744910 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.745953 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.747073 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.748166 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6w2wm"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.751482 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n9qrr"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.753115 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.753287 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.754665 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-79mk5"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.756705 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nkghs"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.762505 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ndp46"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.765384 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.766965 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-77v9m"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.768828 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wrg52"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.769880 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-klhdd"] Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.773880 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.792926 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.812392 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.832496 4799 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.852147 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.873615 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.893456 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.913304 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.934310 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.952500 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.972225 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 12:34:04 crc kubenswrapper[4799]: I0216 12:34:04.992277 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.013695 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 12:34:05 crc 
kubenswrapper[4799]: I0216 12:34:05.033786 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.052273 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.074083 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.092673 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.112780 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.133991 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.153381 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.172485 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.192723 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.212861 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 
12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.234040 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.252899 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.273568 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.294244 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.316915 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.333271 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.353177 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.373872 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.393363 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.413397 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.432869 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.454654 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.474248 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.492334 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.511865 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.533504 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.553853 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.572502 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.594024 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.613360 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.633760 4799 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.652771 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.671448 4799 request.go:700] Waited for 1.013773147s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0 Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.673426 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.680088 4799 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.680164 4799 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.680261 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12ef62d5-7675-44bf-a2e9-53093b004126-machine-api-operator-tls podName:12ef62d5-7675-44bf-a2e9-53093b004126 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.18022114 +0000 UTC m=+151.773236514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/12ef62d5-7675-44bf-a2e9-53093b004126-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-6lds8" (UID: "12ef62d5-7675-44bf-a2e9-53093b004126") : failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.680308 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert podName:ea2b5f46-58b6-41f8-9985-85d5236568ef nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.180287922 +0000 UTC m=+151.773303296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert") pod "controller-manager-879f6c89f-66brb" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef") : failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.680378 4799 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.680466 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-client podName:95586541-b68e-489b-8c9c-73477d70f4dd nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.180447336 +0000 UTC m=+151.773462700 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-client") pod "apiserver-7bbb656c7d-vdgfq" (UID: "95586541-b68e-489b-8c9c-73477d70f4dd") : failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.680594 4799 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.680669 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config podName:6e9b7ea2-185b-443f-8aca-7286501b2a80 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.180649162 +0000 UTC m=+151.773664546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config") pod "route-controller-manager-6576b87f9c-sx8cs" (UID: "6e9b7ea2-185b-443f-8aca-7286501b2a80") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.682337 4799 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.682498 4799 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.682645 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-trusted-ca-bundle podName:95586541-b68e-489b-8c9c-73477d70f4dd nodeName:}" failed. 
No retries permitted until 2026-02-16 12:34:06.182597955 +0000 UTC m=+151.775613299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-trusted-ca-bundle") pod "apiserver-7bbb656c7d-vdgfq" (UID: "95586541-b68e-489b-8c9c-73477d70f4dd") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.682387 4799 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.682409 4799 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.682420 4799 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.682445 4799 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.682887 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles podName:ea2b5f46-58b6-41f8-9985-85d5236568ef nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.182739329 +0000 UTC m=+151.775754863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles") pod "controller-manager-879f6c89f-66brb" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683004 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-serving-cert podName:95586541-b68e-489b-8c9c-73477d70f4dd nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.182977905 +0000 UTC m=+151.775993429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-serving-cert") pod "apiserver-7bbb656c7d-vdgfq" (UID: "95586541-b68e-489b-8c9c-73477d70f4dd") : failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683095 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-config podName:12ef62d5-7675-44bf-a2e9-53093b004126 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.183076138 +0000 UTC m=+151.776091692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-config") pod "machine-api-operator-5694c8668f-6lds8" (UID: "12ef62d5-7675-44bf-a2e9-53093b004126") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683213 4799 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683275 4799 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683292 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert podName:6e9b7ea2-185b-443f-8aca-7286501b2a80 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.183270563 +0000 UTC m=+151.776286117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert") pod "route-controller-manager-6576b87f9c-sx8cs" (UID: "6e9b7ea2-185b-443f-8aca-7286501b2a80") : failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683375 4799 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683383 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca podName:6e9b7ea2-185b-443f-8aca-7286501b2a80 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:34:06.183309254 +0000 UTC m=+151.776324618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca") pod "route-controller-manager-6576b87f9c-sx8cs" (UID: "6e9b7ea2-185b-443f-8aca-7286501b2a80") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683463 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-encryption-config podName:95586541-b68e-489b-8c9c-73477d70f4dd nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.183443858 +0000 UTC m=+151.776459402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-encryption-config") pod "apiserver-7bbb656c7d-vdgfq" (UID: "95586541-b68e-489b-8c9c-73477d70f4dd") : failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683501 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-images podName:12ef62d5-7675-44bf-a2e9-53093b004126 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.183485639 +0000 UTC m=+151.776501213 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-images") pod "machine-api-operator-5694c8668f-6lds8" (UID: "12ef62d5-7675-44bf-a2e9-53093b004126") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683562 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config podName:ea2b5f46-58b6-41f8-9985-85d5236568ef nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.183546741 +0000 UTC m=+151.776562075 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config") pod "controller-manager-879f6c89f-66brb" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683560 4799 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683650 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca podName:ea2b5f46-58b6-41f8-9985-85d5236568ef nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.183634033 +0000 UTC m=+151.776649407 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca") pod "controller-manager-879f6c89f-66brb" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683563 4799 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683722 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-audit-policies podName:95586541-b68e-489b-8c9c-73477d70f4dd nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.183708995 +0000 UTC m=+151.776724369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-audit-policies") pod "apiserver-7bbb656c7d-vdgfq" (UID: "95586541-b68e-489b-8c9c-73477d70f4dd") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683772 4799 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: E0216 12:34:05.683811 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-serving-ca podName:95586541-b68e-489b-8c9c-73477d70f4dd nodeName:}" failed. No retries permitted until 2026-02-16 12:34:06.183799188 +0000 UTC m=+151.776814562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-serving-ca") pod "apiserver-7bbb656c7d-vdgfq" (UID: "95586541-b68e-489b-8c9c-73477d70f4dd") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.692910 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.711848 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.733797 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.752954 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.772649 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.793652 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.813107 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.833409 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.865033 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 
12:34:05.873174 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.892943 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.914079 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.932759 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.961664 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.973217 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 12:34:05 crc kubenswrapper[4799]: I0216 12:34:05.992363 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.012914 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.034485 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.053449 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.071948 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.092446 4799 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.112393 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.133408 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.153410 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.172392 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.196779 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/12ef62d5-7675-44bf-a2e9-53093b004126-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.196902 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-client\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.196953 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config\") pod \"route-controller-manager-6576b87f9c-sx8cs\" 
(UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197052 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-serving-cert\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197093 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-encryption-config\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197153 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197218 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-config\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197311 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197365 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197446 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197499 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197551 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-images\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197689 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197848 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-audit-policies\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197901 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.197952 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.213226 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqdd8\" (UniqueName: \"kubernetes.io/projected/b5cd50be-cad4-4fb3-8732-e870df15eb34-kube-api-access-sqdd8\") pod \"apiserver-76f77b778f-gm29d\" (UID: \"b5cd50be-cad4-4fb3-8732-e870df15eb34\") " pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.235043 
4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.253170 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.274384 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.293701 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.312935 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.314581 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.412847 4799 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.435246 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.439448 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf7gh\" (UniqueName: \"kubernetes.io/projected/de5f2060-f162-4fac-b3ef-2acda638dfb6-kube-api-access-zf7gh\") pod \"downloads-7954f5f757-njdbl\" (UID: \"de5f2060-f162-4fac-b3ef-2acda638dfb6\") " pod="openshift-console/downloads-7954f5f757-njdbl" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.456062 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 12:34:06 crc 
kubenswrapper[4799]: I0216 12:34:06.474117 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.492968 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.513837 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.533648 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.561096 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gm29d"] Feb 16 12:34:06 crc kubenswrapper[4799]: W0216 12:34:06.575818 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5cd50be_cad4_4fb3_8732_e870df15eb34.slice/crio-b32d5da9abfd268efbcaca556f09641ebc3a97eeac3305e7a84433d65b4f5ca1 WatchSource:0}: Error finding container b32d5da9abfd268efbcaca556f09641ebc3a97eeac3305e7a84433d65b4f5ca1: Status 404 returned error can't find the container with id b32d5da9abfd268efbcaca556f09641ebc3a97eeac3305e7a84433d65b4f5ca1 Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.577396 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-njdbl" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.594148 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604151 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x554j\" (UniqueName: \"kubernetes.io/projected/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-kube-api-access-x554j\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604248 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604270 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604298 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2w9n\" (UniqueName: \"kubernetes.io/projected/98bb2e4c-5ed3-4d64-b732-e740b80883f5-kube-api-access-g2w9n\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604461 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-trusted-ca-bundle\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604538 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/744cf3fb-c9e3-442c-bb38-077980637b60-available-featuregates\") pod \"openshift-config-operator-7777fb866f-twvg6\" (UID: \"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604614 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604657 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604710 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-certificates\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604744 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-trusted-ca\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604780 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604820 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604867 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45d1c9c3-e345-4470-8116-8d842f9eb227-auth-proxy-config\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604914 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67094e0b-8edb-4b4f-aed3-a704b0854384-installation-pull-secrets\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.604956 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-auth-proxy-config\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605066 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjth\" (UniqueName: \"kubernetes.io/projected/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-kube-api-access-8gjth\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605183 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzq8c\" (UniqueName: \"kubernetes.io/projected/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-kube-api-access-hzq8c\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605235 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-policies\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605283 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrnd\" (UniqueName: \"kubernetes.io/projected/45d1c9c3-e345-4470-8116-8d842f9eb227-kube-api-access-9rrnd\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605333 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9fqz\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-kube-api-access-p9fqz\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605379 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff3de50-8fb5-4734-b830-c401b052662a-serving-cert\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605424 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-serving-cert\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605469 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/744cf3fb-c9e3-442c-bb38-077980637b60-serving-cert\") pod \"openshift-config-operator-7777fb866f-twvg6\" (UID: \"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605517 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfh5\" (UniqueName: \"kubernetes.io/projected/744cf3fb-c9e3-442c-bb38-077980637b60-kube-api-access-9wfh5\") pod \"openshift-config-operator-7777fb866f-twvg6\" (UID: \"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605611 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49b44f5c-8d79-4192-998a-c303333cff67-serving-cert\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " 
pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605655 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-service-ca\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605700 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605746 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa4da88-9190-4296-b322-c78c18b1f1ac-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: \"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605815 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/825cb96e-cef9-4d1a-952b-5f97b639d1e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-59rnx\" (UID: \"825cb96e-cef9-4d1a-952b-5f97b639d1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605866 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605919 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgndt\" (UniqueName: \"kubernetes.io/projected/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-kube-api-access-pgndt\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.605966 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9g7\" (UniqueName: \"kubernetes.io/projected/9ff3de50-8fb5-4734-b830-c401b052662a-kube-api-access-lp9g7\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606035 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-oauth-config\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606072 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606240 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-proxy-tls\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606277 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606308 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d1c9c3-e345-4470-8116-8d842f9eb227-config\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606356 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-config\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" 
Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606388 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49b44f5c-8d79-4192-998a-c303333cff67-trusted-ca\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606435 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vhq\" (UniqueName: \"kubernetes.io/projected/49b44f5c-8d79-4192-998a-c303333cff67-kube-api-access-m8vhq\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606472 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-bound-sa-token\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606503 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49b44f5c-8d79-4192-998a-c303333cff67-config\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606535 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4m9n\" (UniqueName: 
\"kubernetes.io/projected/ffa4da88-9190-4296-b322-c78c18b1f1ac-kube-api-access-b4m9n\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: \"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606568 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-tls\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606600 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-images\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606633 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606672 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa4da88-9190-4296-b322-c78c18b1f1ac-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: 
\"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606744 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-service-ca-bundle\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606778 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-dir\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606808 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606839 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:06 crc 
kubenswrapper[4799]: I0216 12:34:06.606883 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606921 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606954 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbtpx\" (UniqueName: \"kubernetes.io/projected/825cb96e-cef9-4d1a-952b-5f97b639d1e6-kube-api-access-lbtpx\") pod \"cluster-samples-operator-665b6dd947-59rnx\" (UID: \"825cb96e-cef9-4d1a-952b-5f97b639d1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.606989 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.607020 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45d1c9c3-e345-4470-8116-8d842f9eb227-machine-approver-tls\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.607054 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-oauth-serving-cert\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.607084 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.607114 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.607183 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67094e0b-8edb-4b4f-aed3-a704b0854384-ca-trust-extracted\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.607213 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-config\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.607247 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.607294 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:06 crc kubenswrapper[4799]: E0216 12:34:06.607764 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.107742887 +0000 UTC m=+152.700758431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.613298 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.633204 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.655259 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.673580 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.691170 4799 request.go:700] Waited for 1.952033548s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.694226 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708389 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708606 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-policies\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708635 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rrnd\" (UniqueName: \"kubernetes.io/projected/45d1c9c3-e345-4470-8116-8d842f9eb227-kube-api-access-9rrnd\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708657 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9fqz\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-kube-api-access-p9fqz\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708677 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-serving-cert\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708705 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-mountpoint-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708731 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-config\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708750 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-service-ca\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708770 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa4da88-9190-4296-b322-c78c18b1f1ac-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: \"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708790 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-plugins-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 
12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708808 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/495ad454-0421-4d3a-9488-8923702281c2-metrics-tls\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708839 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/825cb96e-cef9-4d1a-952b-5f97b639d1e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-59rnx\" (UID: \"825cb96e-cef9-4d1a-952b-5f97b639d1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708859 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708878 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp9g7\" (UniqueName: \"kubernetes.io/projected/9ff3de50-8fb5-4734-b830-c401b052662a-kube-api-access-lp9g7\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708898 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-oauth-config\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708921 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjlsk\" (UniqueName: \"kubernetes.io/projected/a6d10e0e-6088-4be2-90a6-5ea568d7ce25-kube-api-access-vjlsk\") pod \"control-plane-machine-set-operator-78cbb6b69f-swx86\" (UID: \"a6d10e0e-6088-4be2-90a6-5ea568d7ce25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708942 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgrn\" (UniqueName: \"kubernetes.io/projected/e1034942-eeca-4ab3-a189-32674858ffac-kube-api-access-dsgrn\") pod \"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708965 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-signing-key\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.708990 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0006bec3-f6dc-4496-aca4-3c330d0db8ab-proxy-tls\") pod \"machine-config-controller-84d6567774-mfccv\" (UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709010 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d46c8684-5e51-4f95-8a90-68e76d701a6a-config-volume\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709031 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709091 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d1c9c3-e345-4470-8116-8d842f9eb227-config\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709114 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-webhook-cert\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709155 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd68d\" (UniqueName: 
\"kubernetes.io/projected/d32fb0f0-4200-401b-803e-a52704008663-kube-api-access-cd68d\") pod \"multus-admission-controller-857f4d67dd-6w2wm\" (UID: \"d32fb0f0-4200-401b-803e-a52704008663\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709176 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6rnc\" (UniqueName: \"kubernetes.io/projected/61eb4af7-4b1d-4f7f-a037-f8b48e40fca7-kube-api-access-j6rnc\") pod \"ingress-canary-klhdd\" (UID: \"61eb4af7-4b1d-4f7f-a037-f8b48e40fca7\") " pod="openshift-ingress-canary/ingress-canary-klhdd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709201 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-socket-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: E0216 12:34:06.709253 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.209207823 +0000 UTC m=+152.802223217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709354 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-config\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709406 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-node-bootstrap-token\") pod \"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709454 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljzzf\" (UniqueName: \"kubernetes.io/projected/7afe030c-130b-4547-b2b2-bdeb076b3d51-kube-api-access-ljzzf\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709490 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-certs\") pod 
\"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709520 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6f0cb13-521b-4df1-bf03-f9161042d3d9-metrics-tls\") pod \"dns-operator-744455d44c-cbjpn\" (UID: \"d6f0cb13-521b-4df1-bf03-f9161042d3d9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709567 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49b44f5c-8d79-4192-998a-c303333cff67-config\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709603 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7dl8\" (UniqueName: \"kubernetes.io/projected/3cf88c98-4151-445d-918e-8b31e853f3f8-kube-api-access-z7dl8\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709640 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-tls\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709689 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e1034942-eeca-4ab3-a189-32674858ffac-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709726 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-default-certificate\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709755 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plm95\" (UniqueName: \"kubernetes.io/projected/63c1f2e4-699c-432e-af51-332bb6e33ba0-kube-api-access-plm95\") pod \"package-server-manager-789f6589d5-5p95v\" (UID: \"63c1f2e4-699c-432e-af51-332bb6e33ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709786 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60cca0b1-26dd-4ae6-a8df-921ad61f7732-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709818 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4stpz\" (UniqueName: \"kubernetes.io/projected/0006bec3-f6dc-4496-aca4-3c330d0db8ab-kube-api-access-4stpz\") pod \"machine-config-controller-84d6567774-mfccv\" 
(UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709856 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-dir\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709890 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709921 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.709953 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710010 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lbtpx\" (UniqueName: \"kubernetes.io/projected/825cb96e-cef9-4d1a-952b-5f97b639d1e6-kube-api-access-lbtpx\") pod \"cluster-samples-operator-665b6dd947-59rnx\" (UID: \"825cb96e-cef9-4d1a-952b-5f97b639d1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710045 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710078 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45d1c9c3-e345-4470-8116-8d842f9eb227-machine-approver-tls\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710113 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-oauth-serving-cert\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710167 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710209 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-ca\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710270 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710316 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x554j\" (UniqueName: \"kubernetes.io/projected/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-kube-api-access-x554j\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710353 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710404 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2w9n\" (UniqueName: 
\"kubernetes.io/projected/98bb2e4c-5ed3-4d64-b732-e740b80883f5-kube-api-access-g2w9n\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710443 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0006bec3-f6dc-4496-aca4-3c330d0db8ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mfccv\" (UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710494 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-trusted-ca-bundle\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710530 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-client\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710567 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:06 crc 
kubenswrapper[4799]: I0216 12:34:06.710611 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710651 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: \"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710687 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-serving-cert\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710725 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-trusted-ca\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710761 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710796 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr6z5\" (UniqueName: \"kubernetes.io/projected/466f6a49-c784-4e32-bc06-fc31fe8bdac4-kube-api-access-lr6z5\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710833 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67094e0b-8edb-4b4f-aed3-a704b0854384-installation-pull-secrets\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710866 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-stats-auth\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710896 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75lp\" (UniqueName: \"kubernetes.io/projected/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-kube-api-access-m75lp\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 
12:34:06.710930 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5t5\" (UniqueName: \"kubernetes.io/projected/62c4c2ac-a865-431e-9bce-4e69e7054888-kube-api-access-bf5t5\") pod \"migrator-59844c95c7-cp4k6\" (UID: \"62c4c2ac-a865-431e-9bce-4e69e7054888\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.710978 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjd7\" (UniqueName: \"kubernetes.io/projected/d46c8684-5e51-4f95-8a90-68e76d701a6a-kube-api-access-rqjd7\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711022 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff3de50-8fb5-4734-b830-c401b052662a-serving-cert\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711058 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/744cf3fb-c9e3-442c-bb38-077980637b60-serving-cert\") pod \"openshift-config-operator-7777fb866f-twvg6\" (UID: \"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711092 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfh5\" (UniqueName: \"kubernetes.io/projected/744cf3fb-c9e3-442c-bb38-077980637b60-kube-api-access-9wfh5\") pod \"openshift-config-operator-7777fb866f-twvg6\" (UID: 
\"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711148 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711188 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cf88c98-4151-445d-918e-8b31e853f3f8-service-ca-bundle\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711256 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49b44f5c-8d79-4192-998a-c303333cff67-serving-cert\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711294 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r65w4\" (UniqueName: \"kubernetes.io/projected/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-kube-api-access-r65w4\") pod \"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711330 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pgndt\" (UniqueName: \"kubernetes.io/projected/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-kube-api-access-pgndt\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711377 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711411 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndf8\" (UniqueName: \"kubernetes.io/projected/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-kube-api-access-cndf8\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711443 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/466f6a49-c784-4e32-bc06-fc31fe8bdac4-srv-cert\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711489 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-signing-cabundle\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711527 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-proxy-tls\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711560 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711594 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-service-ca\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711625 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-secret-volume\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711662 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-config-volume\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711696 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b496h\" (UniqueName: \"kubernetes.io/projected/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-kube-api-access-b496h\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711734 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49b44f5c-8d79-4192-998a-c303333cff67-trusted-ca\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711770 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6v8z\" (UniqueName: \"kubernetes.io/projected/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-kube-api-access-b6v8z\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711819 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vhq\" (UniqueName: \"kubernetes.io/projected/49b44f5c-8d79-4192-998a-c303333cff67-kube-api-access-m8vhq\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc 
kubenswrapper[4799]: I0216 12:34:06.711852 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-bound-sa-token\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711888 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4m9n\" (UniqueName: \"kubernetes.io/projected/ffa4da88-9190-4296-b322-c78c18b1f1ac-kube-api-access-b4m9n\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: \"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711922 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-tmpfs\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711956 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-images\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.711995 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712029 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa4da88-9190-4296-b322-c78c18b1f1ac-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: \"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712062 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afe030c-130b-4547-b2b2-bdeb076b3d51-serving-cert\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712097 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cca0b1-26dd-4ae6-a8df-921ad61f7732-config\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-service-ca-bundle\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712195 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712233 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: \"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712278 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdvm\" (UniqueName: \"kubernetes.io/projected/a06b895d-be38-4663-b92c-172f8a2bbe9d-kube-api-access-wwdvm\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712310 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-config\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712341 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-apiservice-cert\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712375 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt8hn\" (UniqueName: \"kubernetes.io/projected/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-kube-api-access-vt8hn\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712412 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712444 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1034942-eeca-4ab3-a189-32674858ffac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712478 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67094e0b-8edb-4b4f-aed3-a704b0854384-ca-trust-extracted\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712512 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-config\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712544 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712580 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-config\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: \"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712612 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6dkm\" (UniqueName: \"kubernetes.io/projected/d6f0cb13-521b-4df1-bf03-f9161042d3d9-kube-api-access-d6dkm\") pod \"dns-operator-744455d44c-cbjpn\" (UID: \"d6f0cb13-521b-4df1-bf03-f9161042d3d9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712648 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61eb4af7-4b1d-4f7f-a037-f8b48e40fca7-cert\") pod \"ingress-canary-klhdd\" (UID: \"61eb4af7-4b1d-4f7f-a037-f8b48e40fca7\") " pod="openshift-ingress-canary/ingress-canary-klhdd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712713 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2rbg\" (UniqueName: \"kubernetes.io/projected/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-kube-api-access-r2rbg\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712745 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63c1f2e4-699c-432e-af51-332bb6e33ba0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5p95v\" (UID: \"63c1f2e4-699c-432e-af51-332bb6e33ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712783 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495ad454-0421-4d3a-9488-8923702281c2-trusted-ca\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712838 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712881 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d10e0e-6088-4be2-90a6-5ea568d7ce25-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-swx86\" (UID: \"a6d10e0e-6088-4be2-90a6-5ea568d7ce25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712916 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-csi-data-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712946 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/466f6a49-c784-4e32-bc06-fc31fe8bdac4-profile-collector-cert\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.712976 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713009 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-metrics-certs\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713062 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7dx7\" (UniqueName: \"kubernetes.io/projected/495ad454-0421-4d3a-9488-8923702281c2-kube-api-access-h7dx7\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713095 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495ad454-0421-4d3a-9488-8923702281c2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713173 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/744cf3fb-c9e3-442c-bb38-077980637b60-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-twvg6\" (UID: \"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713207 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60cca0b1-26dd-4ae6-a8df-921ad61f7732-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713263 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713295 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d46c8684-5e51-4f95-8a90-68e76d701a6a-metrics-tls\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713327 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-registration-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713372 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-certificates\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713406 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713465 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45d1c9c3-e345-4470-8116-8d842f9eb227-auth-proxy-config\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713502 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d32fb0f0-4200-401b-803e-a52704008663-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6w2wm\" (UID: \"d32fb0f0-4200-401b-803e-a52704008663\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713536 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-auth-proxy-config\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713568 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gjth\" (UniqueName: \"kubernetes.io/projected/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-kube-api-access-8gjth\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713603 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzq8c\" (UniqueName: \"kubernetes.io/projected/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-kube-api-access-hzq8c\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.713635 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-srv-cert\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.714627 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/825cb96e-cef9-4d1a-952b-5f97b639d1e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-59rnx\" (UID: \"825cb96e-cef9-4d1a-952b-5f97b639d1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.715176 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/45d1c9c3-e345-4470-8116-8d842f9eb227-config\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.716462 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67094e0b-8edb-4b4f-aed3-a704b0854384-ca-trust-extracted\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.716607 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa4da88-9190-4296-b322-c78c18b1f1ac-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: \"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.717509 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-images\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.717579 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.717899 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-policies\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.717960 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.718048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.718078 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.718404 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.719152 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-config\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.720344 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-trusted-ca-bundle\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.720942 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.721398 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.721707 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff3de50-8fb5-4734-b830-c401b052662a-serving-cert\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.722423 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.723252 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-trusted-ca\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.723448 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-service-ca-bundle\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.723646 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67094e0b-8edb-4b4f-aed3-a704b0854384-installation-pull-secrets\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.723735 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.723922 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49b44f5c-8d79-4192-998a-c303333cff67-trusted-ca\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.724664 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/744cf3fb-c9e3-442c-bb38-077980637b60-available-featuregates\") pod \"openshift-config-operator-7777fb866f-twvg6\" (UID: \"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.725025 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-proxy-tls\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: E0216 12:34:06.725213 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.22518836 +0000 UTC m=+152.818203924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.725498 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-tls\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.725695 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-dir\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.726468 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-config\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.726824 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.727230 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49b44f5c-8d79-4192-998a-c303333cff67-serving-cert\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.727988 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-certificates\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.727992 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.728386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.728762 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-oauth-serving-cert\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.728902 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49b44f5c-8d79-4192-998a-c303333cff67-config\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.729673 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45d1c9c3-e345-4470-8116-8d842f9eb227-machine-approver-tls\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.729714 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-auth-proxy-config\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.731586 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45d1c9c3-e345-4470-8116-8d842f9eb227-auth-proxy-config\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.731894 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff3de50-8fb5-4734-b830-c401b052662a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.732042 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.732568 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/744cf3fb-c9e3-442c-bb38-077980637b60-serving-cert\") pod \"openshift-config-operator-7777fb866f-twvg6\" (UID: \"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.732926 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.733352 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/12ef62d5-7675-44bf-a2e9-53093b004126-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.734294 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-oauth-config\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.734373 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.734555 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.734664 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.734993 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:06 crc kubenswrapper[4799]: 
I0216 12:34:06.734985 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa4da88-9190-4296-b322-c78c18b1f1ac-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: \"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.735495 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-service-ca\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.738590 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-serving-cert\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.739938 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.758040 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.765980 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-njdbl"] Feb 16 12:34:06 crc 
kubenswrapper[4799]: I0216 12:34:06.773314 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.793930 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.803417 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4nnb\" (UniqueName: \"kubernetes.io/projected/6e9b7ea2-185b-443f-8aca-7286501b2a80-kube-api-access-k4nnb\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.813929 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.814483 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:06 crc kubenswrapper[4799]: E0216 12:34:06.814627 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.314604477 +0000 UTC m=+152.907619821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.814788 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d46c8684-5e51-4f95-8a90-68e76d701a6a-config-volume\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.814837 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.814884 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-webhook-cert\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.814924 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd68d\" (UniqueName: \"kubernetes.io/projected/d32fb0f0-4200-401b-803e-a52704008663-kube-api-access-cd68d\") pod \"multus-admission-controller-857f4d67dd-6w2wm\" 
(UID: \"d32fb0f0-4200-401b-803e-a52704008663\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.814960 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljzzf\" (UniqueName: \"kubernetes.io/projected/7afe030c-130b-4547-b2b2-bdeb076b3d51-kube-api-access-ljzzf\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.814999 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6rnc\" (UniqueName: \"kubernetes.io/projected/61eb4af7-4b1d-4f7f-a037-f8b48e40fca7-kube-api-access-j6rnc\") pod \"ingress-canary-klhdd\" (UID: \"61eb4af7-4b1d-4f7f-a037-f8b48e40fca7\") " pod="openshift-ingress-canary/ingress-canary-klhdd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.815034 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-socket-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.815080 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-node-bootstrap-token\") pod \"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.815117 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-certs\") pod 
\"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.815176 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6f0cb13-521b-4df1-bf03-f9161042d3d9-metrics-tls\") pod \"dns-operator-744455d44c-cbjpn\" (UID: \"d6f0cb13-521b-4df1-bf03-f9161042d3d9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.815583 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-socket-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816021 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7dl8\" (UniqueName: \"kubernetes.io/projected/3cf88c98-4151-445d-918e-8b31e853f3f8-kube-api-access-z7dl8\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816176 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1034942-eeca-4ab3-a189-32674858ffac-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816247 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-default-certificate\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816348 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plm95\" (UniqueName: \"kubernetes.io/projected/63c1f2e4-699c-432e-af51-332bb6e33ba0-kube-api-access-plm95\") pod \"package-server-manager-789f6589d5-5p95v\" (UID: \"63c1f2e4-699c-432e-af51-332bb6e33ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816438 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60cca0b1-26dd-4ae6-a8df-921ad61f7732-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816508 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4stpz\" (UniqueName: \"kubernetes.io/projected/0006bec3-f6dc-4496-aca4-3c330d0db8ab-kube-api-access-4stpz\") pod \"machine-config-controller-84d6567774-mfccv\" (UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816608 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-ca\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc 
kubenswrapper[4799]: I0216 12:34:06.816613 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d46c8684-5e51-4f95-8a90-68e76d701a6a-config-volume\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816762 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0006bec3-f6dc-4496-aca4-3c330d0db8ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mfccv\" (UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816896 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-client\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.816971 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817039 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: \"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817227 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-serving-cert\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817266 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-ca\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817296 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr6z5\" (UniqueName: \"kubernetes.io/projected/466f6a49-c784-4e32-bc06-fc31fe8bdac4-kube-api-access-lr6z5\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817361 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-stats-auth\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817429 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m75lp\" (UniqueName: 
\"kubernetes.io/projected/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-kube-api-access-m75lp\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817496 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5t5\" (UniqueName: \"kubernetes.io/projected/62c4c2ac-a865-431e-9bce-4e69e7054888-kube-api-access-bf5t5\") pod \"migrator-59844c95c7-cp4k6\" (UID: \"62c4c2ac-a865-431e-9bce-4e69e7054888\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817584 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqjd7\" (UniqueName: \"kubernetes.io/projected/d46c8684-5e51-4f95-8a90-68e76d701a6a-kube-api-access-rqjd7\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817657 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cf88c98-4151-445d-918e-8b31e853f3f8-service-ca-bundle\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817738 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r65w4\" (UniqueName: \"kubernetes.io/projected/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-kube-api-access-r65w4\") pod \"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817812 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cndf8\" (UniqueName: \"kubernetes.io/projected/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-kube-api-access-cndf8\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817859 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/466f6a49-c784-4e32-bc06-fc31fe8bdac4-srv-cert\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817911 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-signing-cabundle\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.817961 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-secret-volume\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818003 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-service-ca\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc 
kubenswrapper[4799]: I0216 12:34:06.818058 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-config-volume\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818100 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b496h\" (UniqueName: \"kubernetes.io/projected/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-kube-api-access-b496h\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6v8z\" (UniqueName: \"kubernetes.io/projected/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-kube-api-access-b6v8z\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818280 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-tmpfs\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818333 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afe030c-130b-4547-b2b2-bdeb076b3d51-serving-cert\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818376 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cca0b1-26dd-4ae6-a8df-921ad61f7732-config\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818387 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0006bec3-f6dc-4496-aca4-3c330d0db8ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mfccv\" (UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818412 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818421 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: \"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818504 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwdvm\" (UniqueName: 
\"kubernetes.io/projected/a06b895d-be38-4663-b92c-172f8a2bbe9d-kube-api-access-wwdvm\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818536 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-config\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818556 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-apiservice-cert\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818578 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt8hn\" (UniqueName: \"kubernetes.io/projected/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-kube-api-access-vt8hn\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818601 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1034942-eeca-4ab3-a189-32674858ffac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818640 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-config\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: \"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818660 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6dkm\" (UniqueName: \"kubernetes.io/projected/d6f0cb13-521b-4df1-bf03-f9161042d3d9-kube-api-access-d6dkm\") pod \"dns-operator-744455d44c-cbjpn\" (UID: \"d6f0cb13-521b-4df1-bf03-f9161042d3d9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818688 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63c1f2e4-699c-432e-af51-332bb6e33ba0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5p95v\" (UID: \"63c1f2e4-699c-432e-af51-332bb6e33ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818720 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61eb4af7-4b1d-4f7f-a037-f8b48e40fca7-cert\") pod \"ingress-canary-klhdd\" (UID: \"61eb4af7-4b1d-4f7f-a037-f8b48e40fca7\") " pod="openshift-ingress-canary/ingress-canary-klhdd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818747 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2rbg\" (UniqueName: \"kubernetes.io/projected/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-kube-api-access-r2rbg\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818775 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/466f6a49-c784-4e32-bc06-fc31fe8bdac4-profile-collector-cert\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818806 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495ad454-0421-4d3a-9488-8923702281c2-trusted-ca\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818857 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d10e0e-6088-4be2-90a6-5ea568d7ce25-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-swx86\" (UID: \"a6d10e0e-6088-4be2-90a6-5ea568d7ce25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818879 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-csi-data-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818902 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818925 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-metrics-certs\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818948 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7dx7\" (UniqueName: \"kubernetes.io/projected/495ad454-0421-4d3a-9488-8923702281c2-kube-api-access-h7dx7\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818968 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495ad454-0421-4d3a-9488-8923702281c2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.818990 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60cca0b1-26dd-4ae6-a8df-921ad61f7732-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819001 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6f0cb13-521b-4df1-bf03-f9161042d3d9-metrics-tls\") pod \"dns-operator-744455d44c-cbjpn\" (UID: \"d6f0cb13-521b-4df1-bf03-f9161042d3d9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819042 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819068 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d46c8684-5e51-4f95-8a90-68e76d701a6a-metrics-tls\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819088 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-registration-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819141 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d32fb0f0-4200-401b-803e-a52704008663-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6w2wm\" (UID: \"d32fb0f0-4200-401b-803e-a52704008663\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 
12:34:06.819180 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-srv-cert\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819223 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-mountpoint-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819244 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-config\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819268 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-plugins-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819286 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/495ad454-0421-4d3a-9488-8923702281c2-metrics-tls\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 
12:34:06.819338 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjlsk\" (UniqueName: \"kubernetes.io/projected/a6d10e0e-6088-4be2-90a6-5ea568d7ce25-kube-api-access-vjlsk\") pod \"control-plane-machine-set-operator-78cbb6b69f-swx86\" (UID: \"a6d10e0e-6088-4be2-90a6-5ea568d7ce25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819359 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgrn\" (UniqueName: \"kubernetes.io/projected/e1034942-eeca-4ab3-a189-32674858ffac-kube-api-access-dsgrn\") pod \"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819378 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-signing-key\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819402 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0006bec3-f6dc-4496-aca4-3c330d0db8ab-proxy-tls\") pod \"machine-config-controller-84d6567774-mfccv\" (UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819447 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1034942-eeca-4ab3-a189-32674858ffac-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.819691 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-config\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.820301 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cf88c98-4151-445d-918e-8b31e853f3f8-service-ca-bundle\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.820364 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.820366 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-client\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.820786 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-node-bootstrap-token\") pod \"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.821348 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-default-certificate\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.821989 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-plugins-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.822055 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-webhook-cert\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.822179 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-csi-data-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.823171 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-stats-auth\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.823268 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-config\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: \"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.823813 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7afe030c-130b-4547-b2b2-bdeb076b3d51-etcd-service-ca\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.823864 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495ad454-0421-4d3a-9488-8923702281c2-trusted-ca\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.823910 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cca0b1-26dd-4ae6-a8df-921ad61f7732-config\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.824161 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-config-volume\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.824407 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-tmpfs\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.824470 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-signing-cabundle\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.824695 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1034942-eeca-4ab3-a189-32674858ffac-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.825559 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-apiservice-cert\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.826156 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-config\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.826207 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-mountpoint-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.826284 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-certs\") pod \"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.826379 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a06b895d-be38-4663-b92c-172f8a2bbe9d-registration-dir\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.826929 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-signing-key\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.827175 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d32fb0f0-4200-401b-803e-a52704008663-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6w2wm\" (UID: \"d32fb0f0-4200-401b-803e-a52704008663\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.827260 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-srv-cert\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:06 crc kubenswrapper[4799]: E0216 12:34:06.827288 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.327266363 +0000 UTC m=+152.920281697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.827461 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/466f6a49-c784-4e32-bc06-fc31fe8bdac4-profile-collector-cert\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.828567 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/466f6a49-c784-4e32-bc06-fc31fe8bdac4-srv-cert\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.829348 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-client\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.830009 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: 
\"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.831210 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d46c8684-5e51-4f95-8a90-68e76d701a6a-metrics-tls\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.831521 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-secret-volume\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.831907 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.832531 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-serving-cert\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.832551 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cf88c98-4151-445d-918e-8b31e853f3f8-metrics-certs\") pod 
\"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.832619 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61eb4af7-4b1d-4f7f-a037-f8b48e40fca7-cert\") pod \"ingress-canary-klhdd\" (UID: \"61eb4af7-4b1d-4f7f-a037-f8b48e40fca7\") " pod="openshift-ingress-canary/ingress-canary-klhdd" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.832469 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0006bec3-f6dc-4496-aca4-3c330d0db8ab-proxy-tls\") pod \"machine-config-controller-84d6567774-mfccv\" (UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.833328 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afe030c-130b-4547-b2b2-bdeb076b3d51-serving-cert\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.834660 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d10e0e-6088-4be2-90a6-5ea568d7ce25-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-swx86\" (UID: \"a6d10e0e-6088-4be2-90a6-5ea568d7ce25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.834891 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.835230 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/495ad454-0421-4d3a-9488-8923702281c2-metrics-tls\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.836880 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60cca0b1-26dd-4ae6-a8df-921ad61f7732-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.838665 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63c1f2e4-699c-432e-af51-332bb6e33ba0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5p95v\" (UID: \"63c1f2e4-699c-432e-af51-332bb6e33ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.856281 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.872292 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.883079 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config\") pod 
\"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.893352 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.902517 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.913401 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.918636 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-config\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.920764 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:06 crc kubenswrapper[4799]: E0216 12:34:06.921010 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.420966527 +0000 UTC m=+153.013981871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.921586 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:06 crc kubenswrapper[4799]: E0216 12:34:06.922427 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.422417327 +0000 UTC m=+153.015432681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.933259 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.941617 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-encryption-config\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.952924 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.961250 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95586541-b68e-489b-8c9c-73477d70f4dd-serving-cert\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.972563 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.979387 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/12ef62d5-7675-44bf-a2e9-53093b004126-images\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.992492 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:34:06 crc kubenswrapper[4799]: I0216 12:34:06.999419 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.013038 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.018212 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.022589 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.022782 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.522755052 +0000 UTC m=+153.115770396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.023166 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.023644 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.523632876 +0000 UTC m=+153.116648220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.033672 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.040071 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-njdbl" event={"ID":"de5f2060-f162-4fac-b3ef-2acda638dfb6","Type":"ContainerStarted","Data":"b5e0a22dc8e9f0bab70471343e65c32113a84b3b17288eb01917aca6c55db76d"} Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.040182 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-njdbl" event={"ID":"de5f2060-f162-4fac-b3ef-2acda638dfb6","Type":"ContainerStarted","Data":"a8223eaacb472fcb1be7fcf9afe05d4f47cbf7916ee9244d1d6fb892b7478034"} Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.040320 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-njdbl" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.042570 4799 generic.go:334] "Generic (PLEG): container finished" podID="b5cd50be-cad4-4fb3-8732-e870df15eb34" containerID="f6d59eee1b8049147a1b2078242eac905e23ca26132a134710867549e4620055" exitCode=0 Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.042637 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" 
event={"ID":"b5cd50be-cad4-4fb3-8732-e870df15eb34","Type":"ContainerDied","Data":"f6d59eee1b8049147a1b2078242eac905e23ca26132a134710867549e4620055"} Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.042679 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" event={"ID":"b5cd50be-cad4-4fb3-8732-e870df15eb34","Type":"ContainerStarted","Data":"b32d5da9abfd268efbcaca556f09641ebc3a97eeac3305e7a84433d65b4f5ca1"} Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.042920 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-njdbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.043005 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-njdbl" podUID="de5f2060-f162-4fac-b3ef-2acda638dfb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.054268 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.073937 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.078945 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-audit-policies\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.093247 4799 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.099161 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.113646 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.120577 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57ngm\" (UniqueName: \"kubernetes.io/projected/12ef62d5-7675-44bf-a2e9-53093b004126-kube-api-access-57ngm\") pod \"machine-api-operator-5694c8668f-6lds8\" (UID: \"12ef62d5-7675-44bf-a2e9-53093b004126\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.124646 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.124835 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.624806194 +0000 UTC m=+153.217821558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.125105 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.125664 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.625640147 +0000 UTC m=+153.218655571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.133188 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.145384 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq9w8\" (UniqueName: \"kubernetes.io/projected/95586541-b68e-489b-8c9c-73477d70f4dd-kube-api-access-lq9w8\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.153515 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.158740 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95586541-b68e-489b-8c9c-73477d70f4dd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vdgfq\" (UID: \"95586541-b68e-489b-8c9c-73477d70f4dd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.172479 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.197450 4799 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: 
timed out waiting for the condition Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.197601 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles podName:ea2b5f46-58b6-41f8-9985-85d5236568ef nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.197565265 +0000 UTC m=+153.790580609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles") pod "controller-manager-879f6c89f-66brb" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.197657 4799 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.197711 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca podName:6e9b7ea2-185b-443f-8aca-7286501b2a80 nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.197690068 +0000 UTC m=+153.790705422 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca") pod "route-controller-manager-6576b87f9c-sx8cs" (UID: "6e9b7ea2-185b-443f-8aca-7286501b2a80") : failed to sync configmap cache: timed out waiting for the condition Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.198446 4799 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.198517 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert podName:ea2b5f46-58b6-41f8-9985-85d5236568ef nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.19850236 +0000 UTC m=+153.791517734 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert") pod "controller-manager-879f6c89f-66brb" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef") : failed to sync secret cache: timed out waiting for the condition Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.201406 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.217551 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.230748 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 
12:34:07.231024 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.730999269 +0000 UTC m=+153.324014603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.231641 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.232072 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.233780 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.234063 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.734047183 +0000 UTC m=+153.327062527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.253194 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.268087 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69htr\" (UniqueName: \"kubernetes.io/projected/ea2b5f46-58b6-41f8-9985-85d5236568ef-kube-api-access-69htr\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.270344 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.330322 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfh5\" (UniqueName: \"kubernetes.io/projected/744cf3fb-c9e3-442c-bb38-077980637b60-kube-api-access-9wfh5\") pod \"openshift-config-operator-7777fb866f-twvg6\" (UID: \"744cf3fb-c9e3-442c-bb38-077980637b60\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.330478 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp9g7\" (UniqueName: \"kubernetes.io/projected/9ff3de50-8fb5-4734-b830-c401b052662a-kube-api-access-lp9g7\") pod \"authentication-operator-69f744f599-4fmnw\" (UID: \"9ff3de50-8fb5-4734-b830-c401b052662a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.332343 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.333085 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.833062982 +0000 UTC m=+153.426078306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.351046 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.377671 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x554j\" (UniqueName: \"kubernetes.io/projected/87ed35fc-dae1-4585-a91a-6ecd9a7f555a-kube-api-access-x554j\") pod \"openshift-apiserver-operator-796bbdcf4f-kscxw\" (UID: \"87ed35fc-dae1-4585-a91a-6ecd9a7f555a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.388760 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rrnd\" (UniqueName: \"kubernetes.io/projected/45d1c9c3-e345-4470-8116-8d842f9eb227-kube-api-access-9rrnd\") pod \"machine-approver-56656f9798-jn7wb\" (UID: \"45d1c9c3-e345-4470-8116-8d842f9eb227\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.412640 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.413049 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9fqz\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-kube-api-access-p9fqz\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.430882 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vhq\" (UniqueName: \"kubernetes.io/projected/49b44f5c-8d79-4192-998a-c303333cff67-kube-api-access-m8vhq\") pod \"console-operator-58897d9998-d2xlw\" (UID: \"49b44f5c-8d79-4192-998a-c303333cff67\") " pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.431812 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.435967 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.436677 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:07.936658006 +0000 UTC m=+153.529673340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.456108 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-bound-sa-token\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.477887 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4m9n\" (UniqueName: \"kubernetes.io/projected/ffa4da88-9190-4296-b322-c78c18b1f1ac-kube-api-access-b4m9n\") pod \"openshift-controller-manager-operator-756b6f6bc6-c9fs4\" (UID: \"ffa4da88-9190-4296-b322-c78c18b1f1ac\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.488106 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6lds8"] Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.492463 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgndt\" (UniqueName: \"kubernetes.io/projected/6241a3a8-9b40-468b-b9a2-bc51a9eb0875-kube-api-access-pgndt\") pod \"machine-config-operator-74547568cd-87s27\" (UID: \"6241a3a8-9b40-468b-b9a2-bc51a9eb0875\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:07 crc 
kubenswrapper[4799]: I0216 12:34:07.507436 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzq8c\" (UniqueName: \"kubernetes.io/projected/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-kube-api-access-hzq8c\") pod \"console-f9d7485db-kkq5f\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.535906 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gjth\" (UniqueName: \"kubernetes.io/projected/a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a-kube-api-access-8gjth\") pod \"cluster-image-registry-operator-dc59b4c8b-5kj8n\" (UID: \"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.552045 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.552541 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.052494966 +0000 UTC m=+153.645510300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.553076 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.553489 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.053474513 +0000 UTC m=+153.646489847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.557966 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbtpx\" (UniqueName: \"kubernetes.io/projected/825cb96e-cef9-4d1a-952b-5f97b639d1e6-kube-api-access-lbtpx\") pod \"cluster-samples-operator-665b6dd947-59rnx\" (UID: \"825cb96e-cef9-4d1a-952b-5f97b639d1e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.563719 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq"] Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.574344 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bba0e11a-a6fd-4b3c-83c9-890f4b5fac05-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mkq9r\" (UID: \"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.593908 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2w9n\" (UniqueName: \"kubernetes.io/projected/98bb2e4c-5ed3-4d64-b732-e740b80883f5-kube-api-access-g2w9n\") pod \"oauth-openshift-558db77b4-sl8tw\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 
12:34:07.608068 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljzzf\" (UniqueName: \"kubernetes.io/projected/7afe030c-130b-4547-b2b2-bdeb076b3d51-kube-api-access-ljzzf\") pod \"etcd-operator-b45778765-n9qrr\" (UID: \"7afe030c-130b-4547-b2b2-bdeb076b3d51\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.633001 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6rnc\" (UniqueName: \"kubernetes.io/projected/61eb4af7-4b1d-4f7f-a037-f8b48e40fca7-kube-api-access-j6rnc\") pod \"ingress-canary-klhdd\" (UID: \"61eb4af7-4b1d-4f7f-a037-f8b48e40fca7\") " pod="openshift-ingress-canary/ingress-canary-klhdd" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.642616 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.644006 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-twvg6"] Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.648595 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7dl8\" (UniqueName: \"kubernetes.io/projected/3cf88c98-4151-445d-918e-8b31e853f3f8-kube-api-access-z7dl8\") pod \"router-default-5444994796-nwzhj\" (UID: \"3cf88c98-4151-445d-918e-8b31e853f3f8\") " pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.651531 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" Feb 16 12:34:07 crc kubenswrapper[4799]: W0216 12:34:07.653683 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744cf3fb_c9e3_442c_bb38_077980637b60.slice/crio-705bfc73ec09d954f6a74658579997dd4b38125827c6412dc54765732bcfb728 WatchSource:0}: Error finding container 705bfc73ec09d954f6a74658579997dd4b38125827c6412dc54765732bcfb728: Status 404 returned error can't find the container with id 705bfc73ec09d954f6a74658579997dd4b38125827c6412dc54765732bcfb728 Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.654340 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.654437 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.154412354 +0000 UTC m=+153.747427688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.654885 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.655618 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.155590526 +0000 UTC m=+153.748605860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.663078 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.669555 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd68d\" (UniqueName: \"kubernetes.io/projected/d32fb0f0-4200-401b-803e-a52704008663-kube-api-access-cd68d\") pod \"multus-admission-controller-857f4d67dd-6w2wm\" (UID: \"d32fb0f0-4200-401b-803e-a52704008663\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.675922 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4fmnw"] Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.683156 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.690187 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plm95\" (UniqueName: \"kubernetes.io/projected/63c1f2e4-699c-432e-af51-332bb6e33ba0-kube-api-access-plm95\") pod \"package-server-manager-789f6589d5-5p95v\" (UID: \"63c1f2e4-699c-432e-af51-332bb6e33ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.703108 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-klhdd" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.707534 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60cca0b1-26dd-4ae6-a8df-921ad61f7732-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2df75\" (UID: \"60cca0b1-26dd-4ae6-a8df-921ad61f7732\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.724898 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.728113 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4stpz\" (UniqueName: \"kubernetes.io/projected/0006bec3-f6dc-4496-aca4-3c330d0db8ab-kube-api-access-4stpz\") pod \"machine-config-controller-84d6567774-mfccv\" (UID: \"0006bec3-f6dc-4496-aca4-3c330d0db8ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.739363 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.753644 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.754078 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cndf8\" (UniqueName: \"kubernetes.io/projected/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-kube-api-access-cndf8\") pod \"marketplace-operator-79b997595-wrg52\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.756196 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.756822 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.256765565 +0000 UTC m=+153.849780899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.763807 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.769771 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82e06a11-bf6b-4596-9bfe-b3b9c9e2e954-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-djmcd\" (UID: \"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.771569 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.790209 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.796268 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.797698 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2rbg\" (UniqueName: \"kubernetes.io/projected/0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb-kube-api-access-r2rbg\") pod \"olm-operator-6b444d44fb-lrtf8\" (UID: \"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.806947 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.813254 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.815647 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwdvm\" (UniqueName: \"kubernetes.io/projected/a06b895d-be38-4663-b92c-172f8a2bbe9d-kube-api-access-wwdvm\") pod \"csi-hostpathplugin-nkghs\" (UID: \"a06b895d-be38-4663-b92c-172f8a2bbe9d\") " pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.830816 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.831311 4799 request.go:700] Waited for 1.011626319s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-server/token Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.835480 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqjd7\" (UniqueName: \"kubernetes.io/projected/d46c8684-5e51-4f95-8a90-68e76d701a6a-kube-api-access-rqjd7\") pod \"dns-default-ndp46\" (UID: \"d46c8684-5e51-4f95-8a90-68e76d701a6a\") " pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.850946 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r65w4\" (UniqueName: \"kubernetes.io/projected/38db81b9-e2e3-4a5c-a26b-e02bd66fae07-kube-api-access-r65w4\") pod \"machine-config-server-wx2hx\" (UID: \"38db81b9-e2e3-4a5c-a26b-e02bd66fae07\") " pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.854104 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.861664 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.862167 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.362149948 +0000 UTC m=+153.955165282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.883040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgrn\" (UniqueName: \"kubernetes.io/projected/e1034942-eeca-4ab3-a189-32674858ffac-kube-api-access-dsgrn\") pod \"kube-storage-version-migrator-operator-b67b599dd-znqn5\" (UID: \"e1034942-eeca-4ab3-a189-32674858ffac\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.886809 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.894550 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5t5\" (UniqueName: \"kubernetes.io/projected/62c4c2ac-a865-431e-9bce-4e69e7054888-kube-api-access-bf5t5\") pod \"migrator-59844c95c7-cp4k6\" (UID: \"62c4c2ac-a865-431e-9bce-4e69e7054888\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.905938 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.917423 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw"] Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.926543 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.929795 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr6z5\" (UniqueName: \"kubernetes.io/projected/466f6a49-c784-4e32-bc06-fc31fe8bdac4-kube-api-access-lr6z5\") pod \"catalog-operator-68c6474976-9kh4g\" (UID: \"466f6a49-c784-4e32-bc06-fc31fe8bdac4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.943709 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.944327 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b496h\" (UniqueName: \"kubernetes.io/projected/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-kube-api-access-b496h\") pod \"collect-profiles-29520750-5sn7l\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.959661 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m75lp\" (UniqueName: \"kubernetes.io/projected/823f3cb1-fcc7-4416-b2d2-1a1a4d79e845-kube-api-access-m75lp\") pod \"service-ca-operator-777779d784-79mk5\" (UID: \"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.960468 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.962683 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:07 crc kubenswrapper[4799]: E0216 12:34:07.965371 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.46533298 +0000 UTC m=+154.058348314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.969003 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4"] Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.978369 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt8hn\" (UniqueName: \"kubernetes.io/projected/53b4dc5c-f859-4919-873c-46bf2ce1d4ea-kube-api-access-vt8hn\") pod \"packageserver-d55dfcdfc-smfjj\" (UID: \"53b4dc5c-f859-4919-873c-46bf2ce1d4ea\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.989386 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nkghs" Feb 16 12:34:07 crc kubenswrapper[4799]: I0216 12:34:07.993519 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6dkm\" (UniqueName: \"kubernetes.io/projected/d6f0cb13-521b-4df1-bf03-f9161042d3d9-kube-api-access-d6dkm\") pod \"dns-operator-744455d44c-cbjpn\" (UID: \"d6f0cb13-521b-4df1-bf03-f9161042d3d9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.008472 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.017396 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6v8z\" (UniqueName: \"kubernetes.io/projected/558a0cf2-bf71-43d4-8f20-aefcfa10cda4-kube-api-access-b6v8z\") pod \"service-ca-9c57cc56f-77v9m\" (UID: \"558a0cf2-bf71-43d4-8f20-aefcfa10cda4\") " pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.024300 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wx2hx" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.028852 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495ad454-0421-4d3a-9488-8923702281c2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.044977 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d2xlw"] Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.054483 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" event={"ID":"45d1c9c3-e345-4470-8116-8d842f9eb227","Type":"ContainerStarted","Data":"213fde114cd2a80bde16e6a8a3cc92bcfa2796587f658afa34abc6885714b5e8"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.061562 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjlsk\" (UniqueName: \"kubernetes.io/projected/a6d10e0e-6088-4be2-90a6-5ea568d7ce25-kube-api-access-vjlsk\") pod \"control-plane-machine-set-operator-78cbb6b69f-swx86\" (UID: \"a6d10e0e-6088-4be2-90a6-5ea568d7ce25\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.064071 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" event={"ID":"12ef62d5-7675-44bf-a2e9-53093b004126","Type":"ContainerStarted","Data":"fed6922a00f0e0324eb6cd7911f42afa3287ea4ae1468b52757a1149d8058000"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.065468 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" event={"ID":"12ef62d5-7675-44bf-a2e9-53093b004126","Type":"ContainerStarted","Data":"51b18a6876a7d5e2fd80b9a0ee95fa56f17703191d95c519d6fc0d115e077efa"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.065506 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" event={"ID":"12ef62d5-7675-44bf-a2e9-53093b004126","Type":"ContainerStarted","Data":"700fefc0a26889085258ca6f18e40cf96941dc5b02267c63d8ff4c8835e423ce"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.067500 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.067883 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.567868076 +0000 UTC m=+154.160883410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.073874 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7dx7\" (UniqueName: \"kubernetes.io/projected/495ad454-0421-4d3a-9488-8923702281c2-kube-api-access-h7dx7\") pod \"ingress-operator-5b745b69d9-wn4mc\" (UID: \"495ad454-0421-4d3a-9488-8923702281c2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:08 crc kubenswrapper[4799]: W0216 12:34:08.077488 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49b44f5c_8d79_4192_998a_c303333cff67.slice/crio-71bda1f97aeb83b58f66d443ed640b49aa189503875a504df4b393b7cb93d241 WatchSource:0}: Error finding container 71bda1f97aeb83b58f66d443ed640b49aa189503875a504df4b393b7cb93d241: Status 404 returned error can't find the container with id 71bda1f97aeb83b58f66d443ed640b49aa189503875a504df4b393b7cb93d241 Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.081235 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" event={"ID":"95586541-b68e-489b-8c9c-73477d70f4dd","Type":"ContainerStarted","Data":"bf319ef9aaea002f1e6e91a3b3535be5c00b78b86f002727bb0b50b219ed9ebb"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.081308 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" 
event={"ID":"95586541-b68e-489b-8c9c-73477d70f4dd","Type":"ContainerStarted","Data":"53e0d6a0491cab6e7bfb4d2b3b548ed3059bb4c48ee36cf690405f6a9e64b758"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.091427 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" event={"ID":"9ff3de50-8fb5-4734-b830-c401b052662a","Type":"ContainerStarted","Data":"b6c35edfbce934f28e7438c4723b7f684710b550f694fa562fa659a932734aa4"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.091504 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" event={"ID":"9ff3de50-8fb5-4734-b830-c401b052662a","Type":"ContainerStarted","Data":"5ed1439d5cebbff82bcb5149bc8c527ea79409d6aaefa021b0b31951cc26be26"} Feb 16 12:34:08 crc kubenswrapper[4799]: W0216 12:34:08.093216 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffa4da88_9190_4296_b322_c78c18b1f1ac.slice/crio-17488548b41a13d50e19ed53a8d3ae93026482a533eec5777274337b5182d234 WatchSource:0}: Error finding container 17488548b41a13d50e19ed53a8d3ae93026482a533eec5777274337b5182d234: Status 404 returned error can't find the container with id 17488548b41a13d50e19ed53a8d3ae93026482a533eec5777274337b5182d234 Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.094276 4799 generic.go:334] "Generic (PLEG): container finished" podID="744cf3fb-c9e3-442c-bb38-077980637b60" containerID="020f144aabc774263636e413b28eb76154c717f29de18adc43037bb7ec18eaa7" exitCode=0 Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.094318 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" event={"ID":"744cf3fb-c9e3-442c-bb38-077980637b60","Type":"ContainerDied","Data":"020f144aabc774263636e413b28eb76154c717f29de18adc43037bb7ec18eaa7"} Feb 16 
12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.094382 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" event={"ID":"744cf3fb-c9e3-442c-bb38-077980637b60","Type":"ContainerStarted","Data":"705bfc73ec09d954f6a74658579997dd4b38125827c6412dc54765732bcfb728"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.112769 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" event={"ID":"b5cd50be-cad4-4fb3-8732-e870df15eb34","Type":"ContainerStarted","Data":"59d5b8f7fa1f21f45204d16776497d1ab5c492d26d39b7833aa0e3a455c59f54"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.113335 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" event={"ID":"b5cd50be-cad4-4fb3-8732-e870df15eb34","Type":"ContainerStarted","Data":"cb0abe804908e531e656000c2bbed361d10a9441f895551af2ea160cabb93be7"} Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.119794 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-njdbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.122079 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.122100 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-njdbl" podUID="de5f2060-f162-4fac-b3ef-2acda638dfb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.126071 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx"] Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.141944 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.165411 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.169309 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.177413 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.677343911 +0000 UTC m=+154.270359245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.177981 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.178511 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.179395 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.679374876 +0000 UTC m=+154.272390210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.183098 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-klhdd"] Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.198323 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.215335 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.233072 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.251518 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.279018 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.279514 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.279596 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.279712 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.282012 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-66brb\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.287339 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.287504 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.787480214 +0000 UTC m=+154.380495548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.289562 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca\") pod \"route-controller-manager-6576b87f9c-sx8cs\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.316768 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert\") pod \"controller-manager-879f6c89f-66brb\" (UID: 
\"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.381076 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.381475 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.881453375 +0000 UTC m=+154.474468709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: W0216 12:34:08.424253 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cf88c98_4151_445d_918e_8b31e853f3f8.slice/crio-97e56d70ce1ccabe957ec48991b658286163aa5b30fd88af85e5d9172bf5b6fb WatchSource:0}: Error finding container 97e56d70ce1ccabe957ec48991b658286163aa5b30fd88af85e5d9172bf5b6fb: Status 404 returned error can't find the container with id 97e56d70ce1ccabe957ec48991b658286163aa5b30fd88af85e5d9172bf5b6fb Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.482962 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.483153 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.983100487 +0000 UTC m=+154.576115821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.483412 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.483858 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:08.983839577 +0000 UTC m=+154.576854911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.487622 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.531741 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.584627 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.585101 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.085084477 +0000 UTC m=+154.678099811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.614743 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" podStartSLOduration=132.614727168 podStartE2EDuration="2m12.614727168s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:08.613320299 +0000 UTC m=+154.206335633" watchObservedRunningTime="2026-02-16 12:34:08.614727168 +0000 UTC m=+154.207742502" Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.687206 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.687974 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.187921321 +0000 UTC m=+154.780936675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.788023 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.788276 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.288234415 +0000 UTC m=+154.881249749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.889231 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.889622 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.389600479 +0000 UTC m=+154.982615803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:08 crc kubenswrapper[4799]: I0216 12:34:08.994478 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:08 crc kubenswrapper[4799]: E0216 12:34:08.995643 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.495615799 +0000 UTC m=+155.088631123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.056553 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r"] Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.106881 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.107228 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.607217173 +0000 UTC m=+155.200232507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.217142 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.217840 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.717819259 +0000 UTC m=+155.310834593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.225689 4799 generic.go:334] "Generic (PLEG): container finished" podID="95586541-b68e-489b-8c9c-73477d70f4dd" containerID="bf319ef9aaea002f1e6e91a3b3535be5c00b78b86f002727bb0b50b219ed9ebb" exitCode=0 Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.255334 4799 patch_prober.go:28] interesting pod/console-operator-58897d9998-d2xlw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.255391 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" podUID="49b44f5c-8d79-4192-998a-c303333cff67" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.319490 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.322800 4799 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.822786651 +0000 UTC m=+155.415801985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339008 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" event={"ID":"45d1c9c3-e345-4470-8116-8d842f9eb227","Type":"ContainerStarted","Data":"da6ae60a3733fec67652c6cb978ab9b3a1f0639efa4fa350ebccef4a43867c2e"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339063 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wx2hx" event={"ID":"38db81b9-e2e3-4a5c-a26b-e02bd66fae07","Type":"ContainerStarted","Data":"305081d2265c9ed2c5d9c7b80e24405da9b47ee03b1ac1ebc3a187492a829724"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339074 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" event={"ID":"87ed35fc-dae1-4585-a91a-6ecd9a7f555a","Type":"ContainerStarted","Data":"f2bf6b503f880487f139090235b4745f9aa4092f654c72a5ee37bd0d32331912"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339084 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" 
event={"ID":"87ed35fc-dae1-4585-a91a-6ecd9a7f555a","Type":"ContainerStarted","Data":"292da3584add8d080088243cadb8b6dbd7dcafedd168ef7417140d1bd88cfeea"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339095 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nwzhj" event={"ID":"3cf88c98-4151-445d-918e-8b31e853f3f8","Type":"ContainerStarted","Data":"97e56d70ce1ccabe957ec48991b658286163aa5b30fd88af85e5d9172bf5b6fb"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339138 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339157 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339169 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" event={"ID":"95586541-b68e-489b-8c9c-73477d70f4dd","Type":"ContainerDied","Data":"bf319ef9aaea002f1e6e91a3b3535be5c00b78b86f002727bb0b50b219ed9ebb"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339201 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" event={"ID":"95586541-b68e-489b-8c9c-73477d70f4dd","Type":"ContainerStarted","Data":"e42c0f98bdf3af5e7057a789dcf68bcec4ae29d6790244243f63fd2074bd2fd7"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339269 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" event={"ID":"49b44f5c-8d79-4192-998a-c303333cff67","Type":"ContainerStarted","Data":"71bda1f97aeb83b58f66d443ed640b49aa189503875a504df4b393b7cb93d241"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339312 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" event={"ID":"ffa4da88-9190-4296-b322-c78c18b1f1ac","Type":"ContainerStarted","Data":"36ce6ec33fe7dccfe28392e422e2533b8293913bb1c3332b6f6b36651deca6f4"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339328 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" event={"ID":"ffa4da88-9190-4296-b322-c78c18b1f1ac","Type":"ContainerStarted","Data":"17488548b41a13d50e19ed53a8d3ae93026482a533eec5777274337b5182d234"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339340 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" event={"ID":"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05","Type":"ContainerStarted","Data":"f2a4564e94d08768170c703a1261811d632384c809bb5fac2143c3cbadd99ed3"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339354 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-klhdd" event={"ID":"61eb4af7-4b1d-4f7f-a037-f8b48e40fca7","Type":"ContainerStarted","Data":"121b4c734604843122cffdf5d4a58d6feaac7da4bcfaaa3204f193a617c61dd6"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339369 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-klhdd" event={"ID":"61eb4af7-4b1d-4f7f-a037-f8b48e40fca7","Type":"ContainerStarted","Data":"ef8a68ccf5aaae1805a5c72a08c629f96fb29d7cd968343f2837aa513ed144dc"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.339381 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" event={"ID":"744cf3fb-c9e3-442c-bb38-077980637b60","Type":"ContainerStarted","Data":"c759069e94db43b68c320b96799699efda976cd4ac6980c3353026ba4d4c9646"} Feb 16 12:34:09 crc 
kubenswrapper[4799]: I0216 12:34:09.339394 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" event={"ID":"825cb96e-cef9-4d1a-952b-5f97b639d1e6","Type":"ContainerStarted","Data":"8ae882617104394f8c274a27c2839d964760b37c99cb6e6625a7ee6b45d26ecc"} Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.407477 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4fmnw" podStartSLOduration=133.407446447 podStartE2EDuration="2m13.407446447s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:09.405666488 +0000 UTC m=+154.998681812" watchObservedRunningTime="2026-02-16 12:34:09.407446447 +0000 UTC m=+155.000461781" Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.421425 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.422836 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:09.922812908 +0000 UTC m=+155.515828242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.523966 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.524445 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.024425318 +0000 UTC m=+155.617440652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.625967 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.626401 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.126356377 +0000 UTC m=+155.719371711 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.626626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.627073 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.127057196 +0000 UTC m=+155.720072530 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.728283 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.728479 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.22846165 +0000 UTC m=+155.821476974 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.728271 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lds8" podStartSLOduration=133.728250575 podStartE2EDuration="2m13.728250575s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:09.727587896 +0000 UTC m=+155.320603230" watchObservedRunningTime="2026-02-16 12:34:09.728250575 +0000 UTC m=+155.321265909"
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.728589 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.750200 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.250172774 +0000 UTC m=+155.843188108 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.797517 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-njdbl" podStartSLOduration=133.797490729 podStartE2EDuration="2m13.797490729s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:09.776288269 +0000 UTC m=+155.369303593" watchObservedRunningTime="2026-02-16 12:34:09.797490729 +0000 UTC m=+155.390506063"
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.829577 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.830185 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.330163003 +0000 UTC m=+155.923178337 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.925806 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" podStartSLOduration=133.925779959 podStartE2EDuration="2m13.925779959s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:09.924234167 +0000 UTC m=+155.517249501" watchObservedRunningTime="2026-02-16 12:34:09.925779959 +0000 UTC m=+155.518795293"
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.927048 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-klhdd" podStartSLOduration=5.927039873 podStartE2EDuration="5.927039873s" podCreationTimestamp="2026-02-16 12:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:09.893596878 +0000 UTC m=+155.486612212" watchObservedRunningTime="2026-02-16 12:34:09.927039873 +0000 UTC m=+155.520055207"
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.931692 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:09 crc kubenswrapper[4799]: E0216 12:34:09.932287 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.432273697 +0000 UTC m=+156.025289021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.972851 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" podStartSLOduration=132.972823306 podStartE2EDuration="2m12.972823306s" podCreationTimestamp="2026-02-16 12:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:09.971985493 +0000 UTC m=+155.565000827" watchObservedRunningTime="2026-02-16 12:34:09.972823306 +0000 UTC m=+155.565838640"
Feb 16 12:34:09 crc kubenswrapper[4799]: I0216 12:34:09.987224 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv"]
Feb 16 12:34:10 crc kubenswrapper[4799]: W0216 12:34:10.002976 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0006bec3_f6dc_4496_aca4_3c330d0db8ab.slice/crio-8899a726272816467cda4aa9789adc595b026354fbd62ea44b736db95a7105c4 WatchSource:0}: Error finding container 8899a726272816467cda4aa9789adc595b026354fbd62ea44b736db95a7105c4: Status 404 returned error can't find the container with id 8899a726272816467cda4aa9789adc595b026354fbd62ea44b736db95a7105c4
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.027425 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" podStartSLOduration=134.027397919 podStartE2EDuration="2m14.027397919s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:10.026176356 +0000 UTC m=+155.619191690" watchObservedRunningTime="2026-02-16 12:34:10.027397919 +0000 UTC m=+155.620413243"
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.035992 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.036404 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.536380095 +0000 UTC m=+156.129395419 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.036604 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl8tw"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.045181 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.082981 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6w2wm"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.086940 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.092186 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wrg52"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.093481 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-c9fs4" podStartSLOduration=134.093454697 podStartE2EDuration="2m14.093454697s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:10.067053964 +0000 UTC m=+155.660069298" watchObservedRunningTime="2026-02-16 12:34:10.093454697 +0000 UTC m=+155.686470031"
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.107743 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kscxw" podStartSLOduration=134.107723927 podStartE2EDuration="2m14.107723927s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:10.105688501 +0000 UTC m=+155.698703835" watchObservedRunningTime="2026-02-16 12:34:10.107723927 +0000 UTC m=+155.700739251"
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.137477 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kkq5f"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.137652 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.137987 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.637971095 +0000 UTC m=+156.230986429 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.169532 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.191298 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.193076 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ndp46"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.197766 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-87s27"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.204003 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n9qrr"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.206519 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.239229 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.239571 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.739551464 +0000 UTC m=+156.332566798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.239696 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.240020 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.740010536 +0000 UTC m=+156.333025870 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:10 crc kubenswrapper[4799]: W0216 12:34:10.263513 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6d10e0e_6088_4be2_90a6_5ea568d7ce25.slice/crio-4a37b4ba42d4cdc81cd5c92cf51fca24cec9e4210d3d9dec1cd7e0576fe81f21 WatchSource:0}: Error finding container 4a37b4ba42d4cdc81cd5c92cf51fca24cec9e4210d3d9dec1cd7e0576fe81f21: Status 404 returned error can't find the container with id 4a37b4ba42d4cdc81cd5c92cf51fca24cec9e4210d3d9dec1cd7e0576fe81f21
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.340894 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.341750 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.84173536 +0000 UTC m=+156.434750684 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.351766 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" event={"ID":"0006bec3-f6dc-4496-aca4-3c330d0db8ab","Type":"ContainerStarted","Data":"c1ba5f8fa44120d26df4791dd8f870a3aab251d4940ca5d6a773eba9aaad59ff"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.351869 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" event={"ID":"0006bec3-f6dc-4496-aca4-3c330d0db8ab","Type":"ContainerStarted","Data":"8899a726272816467cda4aa9789adc595b026354fbd62ea44b736db95a7105c4"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.371361 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" event={"ID":"7afe030c-130b-4547-b2b2-bdeb076b3d51","Type":"ContainerStarted","Data":"48571a29884d867a8e9b78a4f2cb9ec47e4a80ed7ac5e83bb010a46ebc982833"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.381514 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.381951 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wx2hx" event={"ID":"38db81b9-e2e3-4a5c-a26b-e02bd66fae07","Type":"ContainerStarted","Data":"56fd61eda27cdfa5ac619575f9518d06c5b843d283cded66ad283ff52c84b951"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.383818 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" event={"ID":"98bb2e4c-5ed3-4d64-b732-e740b80883f5","Type":"ContainerStarted","Data":"0095740b5e3aaa73b87ec902763aece31f05c80b2aecc19d8b680dba407cc1dd"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.385499 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" event={"ID":"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954","Type":"ContainerStarted","Data":"e711c1aec8a809662817b2225c6927008d226a6466bbed75bfc00f53d16188d6"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.393570 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.396439 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" event={"ID":"466f6a49-c784-4e32-bc06-fc31fe8bdac4","Type":"ContainerStarted","Data":"5023c2fb4498dcfa23d767d368b618d538a9740f935c4b6878619cf8d377175e"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.404993 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.407817 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" event={"ID":"d32fb0f0-4200-401b-803e-a52704008663","Type":"ContainerStarted","Data":"7f6b5b57f4d56e74ff9b94308d0247fb0d7e53a93e30014d41391bb242870f75"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.409111 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" event={"ID":"6241a3a8-9b40-468b-b9a2-bc51a9eb0875","Type":"ContainerStarted","Data":"754e3d878dc9ed463267b3b2e6641e304cfccf0efdbe7667948253a24c650f69"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.413184 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wx2hx" podStartSLOduration=6.413167404 podStartE2EDuration="6.413167404s" podCreationTimestamp="2026-02-16 12:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:10.40972629 +0000 UTC m=+156.002741624" watchObservedRunningTime="2026-02-16 12:34:10.413167404 +0000 UTC m=+156.006182738"
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.414153 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kkq5f" event={"ID":"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9","Type":"ContainerStarted","Data":"be775679dafb19c7978db06ad371464b255f7d69d4cc765785bcd12b519734f6"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.446370 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.447466 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nkghs"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.447534 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.448052 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:10.948032088 +0000 UTC m=+156.541047422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.451710 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.452926 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" event={"ID":"49b44f5c-8d79-4192-998a-c303333cff67","Type":"ContainerStarted","Data":"0277be30f1d490ee613c5f9f0939d18bb3e29e679d03d2f8fdf017f96efe204b"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.453693 4799 patch_prober.go:28] interesting pod/console-operator-58897d9998-d2xlw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.453734 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" podUID="49b44f5c-8d79-4192-998a-c303333cff67" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.460420 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" event={"ID":"63c1f2e4-699c-432e-af51-332bb6e33ba0","Type":"ContainerStarted","Data":"cfaaea50d75b7421a82bd315e060ed190a66e5abc5d5abbe3ad4d79a6d9788e0"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.471588 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nwzhj" event={"ID":"3cf88c98-4151-445d-918e-8b31e853f3f8","Type":"ContainerStarted","Data":"e1b89adfbf8336922cd596c38e46cd1aa8b346c6a3ebad9f5ce4acc286f54301"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.484262 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.489742 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-77v9m"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.499978 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ndp46" event={"ID":"d46c8684-5e51-4f95-8a90-68e76d701a6a","Type":"ContainerStarted","Data":"50ce435af0f857e40aa55fa8290977c7b5f104f04c447234731248072953241c"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.502332 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6"]
Feb 16 12:34:10 crc kubenswrapper[4799]: W0216 12:34:10.515218 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53b4dc5c_f859_4919_873c_46bf2ce1d4ea.slice/crio-dad411419b44e60c8d07c6bf099035d08e1d708d6bdd1d1b6562f8a58c779b33 WatchSource:0}: Error finding container dad411419b44e60c8d07c6bf099035d08e1d708d6bdd1d1b6562f8a58c779b33: Status 404 returned error can't find the container with id dad411419b44e60c8d07c6bf099035d08e1d708d6bdd1d1b6562f8a58c779b33
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.516611 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.534254 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nwzhj" podStartSLOduration=134.534215446 podStartE2EDuration="2m14.534215446s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:10.519946766 +0000 UTC m=+156.112962100" watchObservedRunningTime="2026-02-16 12:34:10.534215446 +0000 UTC m=+156.127230780"
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.537426 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cbjpn"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.537475 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-79mk5"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.549197 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.551203 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.05116995 +0000 UTC m=+156.644185284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.557738 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-66brb"]
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.594081 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" event={"ID":"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a","Type":"ContainerStarted","Data":"6e75c2d945b8d333dc573a56e6ba7ec220df888f2abb9cc235addfc3487fd139"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.597878 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" event={"ID":"ffbd79e8-b486-40f6-bc8a-94a92f32a71e","Type":"ContainerStarted","Data":"af1fd1977c4283863249b9f448c648f71a1ca424f81ba2e926a6583cdbc8e6cc"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.615961 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" event={"ID":"825cb96e-cef9-4d1a-952b-5f97b639d1e6","Type":"ContainerStarted","Data":"42e3055c28864e759cde8136371a4ece151fcadb08b1044af1ad92f76d58d40a"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.616038 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" event={"ID":"825cb96e-cef9-4d1a-952b-5f97b639d1e6","Type":"ContainerStarted","Data":"a791a36158e125bb88441abd85f7d0475f00eaedf5d124dba2e6f4e63adc78b3"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.624178 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" event={"ID":"45d1c9c3-e345-4470-8116-8d842f9eb227","Type":"ContainerStarted","Data":"74550d0981fcd0515537ed1d7896ebb3f7b3e9cbb839f1455bc6ac84eb983b5d"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.652089 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.653627 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.153605643 +0000 UTC m=+156.746620977 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.654723 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" event={"ID":"a6d10e0e-6088-4be2-90a6-5ea568d7ce25","Type":"ContainerStarted","Data":"4a37b4ba42d4cdc81cd5c92cf51fca24cec9e4210d3d9dec1cd7e0576fe81f21"}
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.667485 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-59rnx" podStartSLOduration=134.667459682 podStartE2EDuration="2m14.667459682s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:10.639829636 +0000 UTC m=+156.232844960" watchObservedRunningTime="2026-02-16 12:34:10.667459682 +0000 UTC m=+156.260475006"
Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.668474 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jn7wb" podStartSLOduration=134.668464459 podStartE2EDuration="2m14.668464459s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:10.66702379 +0000 UTC m=+156.260039124" watchObservedRunningTime="2026-02-16 12:34:10.668464459 +0000
UTC m=+156.261479793" Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.675388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" event={"ID":"bba0e11a-a6fd-4b3c-83c9-890f4b5fac05","Type":"ContainerStarted","Data":"f08e43082a8e3960235aaf1709708cab32ab958fef6a57de6295e897c042988b"} Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.700370 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mkq9r" podStartSLOduration=134.700351412 podStartE2EDuration="2m14.700351412s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:10.700064994 +0000 UTC m=+156.293080328" watchObservedRunningTime="2026-02-16 12:34:10.700351412 +0000 UTC m=+156.293366746" Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.757574 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.758989 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.258965245 +0000 UTC m=+156.851980579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.854317 4799 csr.go:261] certificate signing request csr-d8fn2 is approved, waiting to be issued Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.859149 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.862105 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.362091157 +0000 UTC m=+156.955106491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.863303 4799 csr.go:257] certificate signing request csr-d8fn2 is issued Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.952428 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.959372 4799 patch_prober.go:28] interesting pod/router-default-5444994796-nwzhj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:34:10 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Feb 16 12:34:10 crc kubenswrapper[4799]: [+]process-running ok Feb 16 12:34:10 crc kubenswrapper[4799]: healthz check failed Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.959468 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nwzhj" podUID="3cf88c98-4151-445d-918e-8b31e853f3f8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:34:10 crc kubenswrapper[4799]: I0216 12:34:10.960010 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:10 crc kubenswrapper[4799]: E0216 12:34:10.960470 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.460455338 +0000 UTC m=+157.053470672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.061619 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.062057 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.562041208 +0000 UTC m=+157.155056542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.163000 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.163551 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.663532915 +0000 UTC m=+157.256548249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.264927 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.265389 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.765368011 +0000 UTC m=+157.358383335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.313639 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.315187 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.345502 4799 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gm29d container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]log ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]etcd ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/max-in-flight-filter ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 12:34:11 crc kubenswrapper[4799]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 16 12:34:11 crc kubenswrapper[4799]: 
[+]poststarthook/project.openshift.io-projectcache ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/openshift.io-startinformers ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 12:34:11 crc kubenswrapper[4799]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 12:34:11 crc kubenswrapper[4799]: livez check failed Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.345560 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" podUID="b5cd50be-cad4-4fb3-8732-e870df15eb34" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.366932 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.367758 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.867738942 +0000 UTC m=+157.460754276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.471432 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.473951 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:11.973929417 +0000 UTC m=+157.566944751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.572315 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.573898 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.073879911 +0000 UTC m=+157.666895245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.675596 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.676040 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.176024545 +0000 UTC m=+157.769039879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.761603 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" event={"ID":"6e9b7ea2-185b-443f-8aca-7286501b2a80","Type":"ContainerStarted","Data":"bc81dd9dd523a0af1ccc6b32185e09b5aad807d6759a5bf13e96c3bf5ddceaee"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.780194 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.780803 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.280783552 +0000 UTC m=+157.873798886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.790687 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" event={"ID":"ea2b5f46-58b6-41f8-9985-85d5236568ef","Type":"ContainerStarted","Data":"dd30f27eb757cd5260b0ac03e6fa0046041a656fec6c68155f37987b7202ba84"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.842697 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.852944 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" event={"ID":"63c1f2e4-699c-432e-af51-332bb6e33ba0","Type":"ContainerStarted","Data":"13ffc30c2f73cc68dc1aec0907c5fe4590cd6d5419beafae21cd59e3bc74ab13"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.862246 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" event={"ID":"466f6a49-c784-4e32-bc06-fc31fe8bdac4","Type":"ContainerStarted","Data":"62d4bbd702ee4eb9b8b198c719afd04d66242915ddcc8d3b9965dc46c4475494"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.862478 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.866207 4799 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 12:29:10 +0000 UTC, rotation deadline is 2026-12-02 11:27:44.178278242 +0000 UTC Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.866237 4799 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9kh4g container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.866249 4799 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6934h53m32.312031842s for next certificate rotation Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.866308 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" podUID="466f6a49-c784-4e32-bc06-fc31fe8bdac4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.882419 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:11 crc kubenswrapper[4799]: E0216 12:34:11.882852 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.382837474 +0000 UTC m=+157.975852808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.891035 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" event={"ID":"6241a3a8-9b40-468b-b9a2-bc51a9eb0875","Type":"ContainerStarted","Data":"796f8287309ef9b8030b861d0611859eea1b8c67aa99771426b7048026a496e4"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.899249 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" podStartSLOduration=135.899230183 podStartE2EDuration="2m15.899230183s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:11.897174996 +0000 UTC m=+157.490190350" watchObservedRunningTime="2026-02-16 12:34:11.899230183 +0000 UTC m=+157.492245517" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.909398 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" event={"ID":"ffbd79e8-b486-40f6-bc8a-94a92f32a71e","Type":"ContainerStarted","Data":"6ea1c32423ae94cb1936bb4a541e60f2b4ca6f6b792b5af8b1b01b2a731a08df"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.910182 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 
12:34:11.917627 4799 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wrg52 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.917707 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" podUID="ffbd79e8-b486-40f6-bc8a-94a92f32a71e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.935015 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" event={"ID":"82e06a11-bf6b-4596-9bfe-b3b9c9e2e954","Type":"ContainerStarted","Data":"54af82781778e6a77c46548fa1e8fa5ff019535461d05a381adfb9cb5b77c371"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.942053 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nkghs" event={"ID":"a06b895d-be38-4663-b92c-172f8a2bbe9d","Type":"ContainerStarted","Data":"d1c1f13dad77734a3d7e8d198bf503eecb3a8f3790a9defb75f7902e07f164d5"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.951940 4799 patch_prober.go:28] interesting pod/router-default-5444994796-nwzhj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:34:11 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Feb 16 12:34:11 crc kubenswrapper[4799]: [+]process-running ok Feb 16 12:34:11 crc kubenswrapper[4799]: healthz check failed Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.952012 4799 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nwzhj" podUID="3cf88c98-4151-445d-918e-8b31e853f3f8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.961663 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" event={"ID":"a6d10e0e-6088-4be2-90a6-5ea568d7ce25","Type":"ContainerStarted","Data":"6a2e5408c1da0718ab9f687d3ac8f99c5617ab719f643fd97cb69dd0142e7f0f"} Feb 16 12:34:11 crc kubenswrapper[4799]: I0216 12:34:11.970955 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" podStartSLOduration=134.970914864 podStartE2EDuration="2m14.970914864s" podCreationTimestamp="2026-02-16 12:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:11.934614171 +0000 UTC m=+157.527629495" watchObservedRunningTime="2026-02-16 12:34:11.970914864 +0000 UTC m=+157.563930198" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.037302 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" event={"ID":"53b4dc5c-f859-4919-873c-46bf2ce1d4ea","Type":"ContainerStarted","Data":"dad411419b44e60c8d07c6bf099035d08e1d708d6bdd1d1b6562f8a58c779b33"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.038517 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.041400 4799 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.541381852 +0000 UTC m=+158.134397186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.059855 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" event={"ID":"d32fb0f0-4200-401b-803e-a52704008663","Type":"ContainerStarted","Data":"783e781efdff805b5a1b746440c6870ec865c78583af80d7289c4357d34389d2"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.080321 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-djmcd" podStartSLOduration=136.080298037 podStartE2EDuration="2m16.080298037s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.036817927 +0000 UTC m=+157.629833261" watchObservedRunningTime="2026-02-16 12:34:12.080298037 +0000 UTC m=+157.673313371" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.082186 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-swx86" podStartSLOduration=136.082178398 
podStartE2EDuration="2m16.082178398s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.075730152 +0000 UTC m=+157.668745486" watchObservedRunningTime="2026-02-16 12:34:12.082178398 +0000 UTC m=+157.675193732" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.096568 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" event={"ID":"e1034942-eeca-4ab3-a189-32674858ffac","Type":"ContainerStarted","Data":"77ead1c098507afbc265143c042d2c21a669daaa2124f43c29a9ebd9ddf05e8c"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.109189 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" event={"ID":"62c4c2ac-a865-431e-9bce-4e69e7054888","Type":"ContainerStarted","Data":"850d24aac3c831ebe5ca1fe7769b5396a4f6b1913f0a0dfa7e6000de303d8993"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.114582 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" event={"ID":"d6f0cb13-521b-4df1-bf03-f9161042d3d9","Type":"ContainerStarted","Data":"e005b4b0d0b3091f09b0fdbbdc2cf07f930c6befdbb339924a34670b1910b0a0"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.117165 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" event={"ID":"e6ab08e0-f4bc-4dcc-abaf-876b063165ad","Type":"ContainerStarted","Data":"aaa7a0ce9bbd09bbe65107188212b2ff4c9b1f30ecbce2fafc52dbfbbfd09d09"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.117208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" 
event={"ID":"e6ab08e0-f4bc-4dcc-abaf-876b063165ad","Type":"ContainerStarted","Data":"4aa774df418abc094c085268c259f2a1e40ac21e2a4d183eae779a3a199984af"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.119746 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kkq5f" event={"ID":"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9","Type":"ContainerStarted","Data":"35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.121889 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ndp46" event={"ID":"d46c8684-5e51-4f95-8a90-68e76d701a6a","Type":"ContainerStarted","Data":"952e6e34e8bf0f94975eb90c91105a1d591498582a8c8b5eca1973d4ef0a46a6"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.123777 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" event={"ID":"98bb2e4c-5ed3-4d64-b732-e740b80883f5","Type":"ContainerStarted","Data":"4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.124545 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.126637 4799 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sl8tw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.126771 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 
10.217.0.15:6443: connect: connection refused" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.127612 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" event={"ID":"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb","Type":"ContainerStarted","Data":"2bdf5e19d6c01cfe2676c9599d1df0c4c6fc4968dcdfb2140a86eda1a5fc0415"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.127665 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" event={"ID":"0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb","Type":"ContainerStarted","Data":"6d7d0d13e10218fa39fb34204ba1bd60c7e024fa86c4c5587ba61fd2bc8ae335"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.127848 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.136089 4799 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lrtf8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.136186 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" podUID="0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.138850 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" podStartSLOduration=136.138829218 podStartE2EDuration="2m16.138829218s" 
podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.135397154 +0000 UTC m=+157.728412488" watchObservedRunningTime="2026-02-16 12:34:12.138829218 +0000 UTC m=+157.731844552" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.140723 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.141316 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.641295466 +0000 UTC m=+158.234310800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.152416 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" event={"ID":"a63bc9e6-447c-4aa2-9ef5-f3718c2f0f6a","Type":"ContainerStarted","Data":"34f4a8a78bb2d479b44b66dad67a66596429dd3177cf63fb409cbd8f877ddc36"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.156551 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" podStartSLOduration=136.156530943 podStartE2EDuration="2m16.156530943s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.154712003 +0000 UTC m=+157.747727337" watchObservedRunningTime="2026-02-16 12:34:12.156530943 +0000 UTC m=+157.749546277" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.164552 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" event={"ID":"558a0cf2-bf71-43d4-8f20-aefcfa10cda4","Type":"ContainerStarted","Data":"07a8c743d565c1018bd48e45ccb2591937ea4c7e34c61ab4c42d1ddb9d623761"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.173527 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" 
event={"ID":"7afe030c-130b-4547-b2b2-bdeb076b3d51","Type":"ContainerStarted","Data":"00b91a4d48781b0f718d17d539b539a330479f875281421910c5a186820d0223"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.178931 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" event={"ID":"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845","Type":"ContainerStarted","Data":"efff340af2ebf4cf9d0364844cfb1bddfaff2c72366c4f66651a36d19ef2b4ea"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.179055 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" event={"ID":"823f3cb1-fcc7-4416-b2d2-1a1a4d79e845","Type":"ContainerStarted","Data":"7a1db57bfeb1265c1410664787e2599128f9ddf067757e7bd7d57a06e0feb67a"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.188400 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" event={"ID":"60cca0b1-26dd-4ae6-a8df-921ad61f7732","Type":"ContainerStarted","Data":"1b63fd07419010526bc7d209f9c67be810b891cac729a085971413cbca13d104"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.193730 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" event={"ID":"0006bec3-f6dc-4496-aca4-3c330d0db8ab","Type":"ContainerStarted","Data":"29bf618cdb1fd9e7330464b20b87d4d79519480df2508b3cb7dc71c01ae44a38"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.197521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" event={"ID":"495ad454-0421-4d3a-9488-8923702281c2","Type":"ContainerStarted","Data":"f619068dbb7f2e85cf116cff8a21be9652ba12fe502621f940e0743b85a3729a"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.197575 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" event={"ID":"495ad454-0421-4d3a-9488-8923702281c2","Type":"ContainerStarted","Data":"ef3785ee378c59c23b3d184273c4b16ac6e547a1f6bcacf92dfd4b6b3afd7732"} Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.203544 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" podStartSLOduration=136.203526918 podStartE2EDuration="2m16.203526918s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.183598383 +0000 UTC m=+157.776613717" watchObservedRunningTime="2026-02-16 12:34:12.203526918 +0000 UTC m=+157.796542252" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.205380 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kkq5f" podStartSLOduration=136.205372539 podStartE2EDuration="2m16.205372539s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.203869818 +0000 UTC m=+157.796885152" watchObservedRunningTime="2026-02-16 12:34:12.205372539 +0000 UTC m=+157.798387873" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.208701 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-d2xlw" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.234269 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.234775 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 
12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.245419 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.245763 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.745713103 +0000 UTC m=+158.338728437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.245946 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.246559 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.249279 4799 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.749229989 +0000 UTC m=+158.342245323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.281349 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mfccv" podStartSLOduration=136.281140072 podStartE2EDuration="2m16.281140072s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.227456303 +0000 UTC m=+157.820471637" watchObservedRunningTime="2026-02-16 12:34:12.281140072 +0000 UTC m=+157.874155406" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.320079 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-79mk5" podStartSLOduration=135.320051016 podStartE2EDuration="2m15.320051016s" podCreationTimestamp="2026-02-16 12:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.288181515 +0000 UTC m=+157.881196849" watchObservedRunningTime="2026-02-16 12:34:12.320051016 +0000 UTC m=+157.913066350" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.336593 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-n9qrr" podStartSLOduration=136.336566308 podStartE2EDuration="2m16.336566308s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.320791837 +0000 UTC m=+157.913807171" watchObservedRunningTime="2026-02-16 12:34:12.336566308 +0000 UTC m=+157.929581642" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.339093 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5kj8n" podStartSLOduration=136.339083767 podStartE2EDuration="2m16.339083767s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:12.335869009 +0000 UTC m=+157.928884343" watchObservedRunningTime="2026-02-16 12:34:12.339083767 +0000 UTC m=+157.932099121" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.357570 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.358557 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.858528669 +0000 UTC m=+158.451544003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.459542 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.459978 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:12.959962935 +0000 UTC m=+158.552978269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.560372 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.560557 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.060505725 +0000 UTC m=+158.653521069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.561461 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.561933 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.061915964 +0000 UTC m=+158.654931298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.662716 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.662892 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.162858606 +0000 UTC m=+158.755873950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.663077 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.664108 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.164051378 +0000 UTC m=+158.757066772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.763878 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.764118 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.264076845 +0000 UTC m=+158.857092179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.764554 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.765015 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.26499754 +0000 UTC m=+158.858012874 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.865743 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.866216 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.366176609 +0000 UTC m=+158.959191943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.866424 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.866869 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.366848397 +0000 UTC m=+158.959863721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.951312 4799 patch_prober.go:28] interesting pod/router-default-5444994796-nwzhj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:34:12 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Feb 16 12:34:12 crc kubenswrapper[4799]: [+]process-running ok Feb 16 12:34:12 crc kubenswrapper[4799]: healthz check failed Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.951414 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nwzhj" podUID="3cf88c98-4151-445d-918e-8b31e853f3f8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.967553 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.967852 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:34:13.46782722 +0000 UTC m=+159.060842554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:12 crc kubenswrapper[4799]: I0216 12:34:12.968074 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:12 crc kubenswrapper[4799]: E0216 12:34:12.968412 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.468403206 +0000 UTC m=+159.061418540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.069537 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.070254 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.570220411 +0000 UTC m=+159.163235775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.171514 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.172293 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.672270234 +0000 UTC m=+159.265285558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.204428 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" event={"ID":"e1034942-eeca-4ab3-a189-32674858ffac","Type":"ContainerStarted","Data":"98d685bd820c7c8ea66e68b72d2a064cf24842e23fa2a00cc97110258e6fad03"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.205958 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nkghs" event={"ID":"a06b895d-be38-4663-b92c-172f8a2bbe9d","Type":"ContainerStarted","Data":"60346c5cde5f1275c4c532078fea003502f49f2533d7d828d7419d42677c361c"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.208360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" event={"ID":"63c1f2e4-699c-432e-af51-332bb6e33ba0","Type":"ContainerStarted","Data":"113db1954b06314e7859ed868150d0258f0e986eecab934cae24f67c335be80a"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.208548 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.210820 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" 
event={"ID":"6241a3a8-9b40-468b-b9a2-bc51a9eb0875","Type":"ContainerStarted","Data":"e5a253a74cfbd728cde51011f1f057444aa665a7938b265e18b7ca41a6091d9b"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.212663 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" event={"ID":"495ad454-0421-4d3a-9488-8923702281c2","Type":"ContainerStarted","Data":"ec04760e13dc662ee5a034868ce4261fb5ebc56dcbbb295730afc0d2850ffebb"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.214919 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" event={"ID":"558a0cf2-bf71-43d4-8f20-aefcfa10cda4","Type":"ContainerStarted","Data":"0a93b6c086f78dd6730b57c25e7f912131ec2400b55efef067122eefaf80549f"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.216756 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ndp46" event={"ID":"d46c8684-5e51-4f95-8a90-68e76d701a6a","Type":"ContainerStarted","Data":"fd267d769049d038e6b39fa4fbeeac597d432906df4d8d0542656476a761bba5"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.217216 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.222687 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" event={"ID":"60cca0b1-26dd-4ae6-a8df-921ad61f7732","Type":"ContainerStarted","Data":"24760fddf89d4a45d064de0d676cf245bd06bb3f298c45e3837ce49e3adfd7e9"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.225283 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" event={"ID":"d32fb0f0-4200-401b-803e-a52704008663","Type":"ContainerStarted","Data":"ad6039c472d215a72d3d856f4e45492684fd64f6287693648dc2c51a9d5b5c97"} Feb 16 12:34:13 crc 
kubenswrapper[4799]: I0216 12:34:13.227222 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" event={"ID":"ea2b5f46-58b6-41f8-9985-85d5236568ef","Type":"ContainerStarted","Data":"8bbf57d40d90afc627265d85de582a6374fda2500e4d5f4dfcb80f687b2091ce"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.228006 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.230713 4799 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-66brb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.230765 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" podUID="ea2b5f46-58b6-41f8-9985-85d5236568ef" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.232503 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" event={"ID":"62c4c2ac-a865-431e-9bce-4e69e7054888","Type":"ContainerStarted","Data":"83901e8d02f79b3e9322e79aa76b6b5b86ae326675280f14a0007c423d26cd4c"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.232536 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" event={"ID":"62c4c2ac-a865-431e-9bce-4e69e7054888","Type":"ContainerStarted","Data":"99fa29fe7dd9aa61efc95828aed04577ec6b5df2e77b4dab148a7b82789da3e8"} Feb 16 12:34:13 crc 
kubenswrapper[4799]: I0216 12:34:13.238596 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" event={"ID":"d6f0cb13-521b-4df1-bf03-f9161042d3d9","Type":"ContainerStarted","Data":"0d76f605365171cd2204530e4b499483b4330a91df0a35372324a4e214a5b9d3"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.238652 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" event={"ID":"d6f0cb13-521b-4df1-bf03-f9161042d3d9","Type":"ContainerStarted","Data":"4a1293c82c4a82c707ca41872a4bd668da15a3421a33ca725b8b2102a9cc197b"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.242750 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" event={"ID":"53b4dc5c-f859-4919-873c-46bf2ce1d4ea","Type":"ContainerStarted","Data":"5527092bc03ccf5b44e8f8e166298f28d3c3a6969a06c032276cb91fd4a32d81"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.243763 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.248415 4799 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-smfjj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.248491 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" podUID="53b4dc5c-f859-4919-873c-46bf2ce1d4ea" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 
12:34:13.250455 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" event={"ID":"6e9b7ea2-185b-443f-8aca-7286501b2a80","Type":"ContainerStarted","Data":"db5289526292369c62f188ca4ebc443f52d129d51a05092d969277bbf3b1614b"} Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.251175 4799 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lrtf8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.251234 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" podUID="0c9c0115-dd3c-46a8-b9a9-68a6d461d0bb" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.251730 4799 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wrg52 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.251761 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" podUID="ffbd79e8-b486-40f6-bc8a-94a92f32a71e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.267287 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vdgfq" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.268470 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.272579 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.275108 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.775092757 +0000 UTC m=+159.368108091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.333444 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-znqn5" podStartSLOduration=137.333404332 podStartE2EDuration="2m17.333404332s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.248287633 +0000 UTC m=+158.841302967" watchObservedRunningTime="2026-02-16 12:34:13.333404332 +0000 UTC m=+158.926419666" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.337506 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kh4g" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.381512 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.381996 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-16 12:34:13.881979191 +0000 UTC m=+159.474994525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.402054 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4mc" podStartSLOduration=137.402016029 podStartE2EDuration="2m17.402016029s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.322672229 +0000 UTC m=+158.915687563" watchObservedRunningTime="2026-02-16 12:34:13.402016029 +0000 UTC m=+158.995031403" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.406392 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" podStartSLOduration=137.406378339 podStartE2EDuration="2m17.406378339s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.399689256 +0000 UTC m=+158.992704590" watchObservedRunningTime="2026-02-16 12:34:13.406378339 +0000 UTC m=+158.999393673" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.442520 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ndp46" podStartSLOduration=9.442495067 
podStartE2EDuration="9.442495067s" podCreationTimestamp="2026-02-16 12:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.440755479 +0000 UTC m=+159.033770813" watchObservedRunningTime="2026-02-16 12:34:13.442495067 +0000 UTC m=+159.035510401" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.449744 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-twvg6" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.468463 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cp4k6" podStartSLOduration=137.468440227 podStartE2EDuration="2m17.468440227s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.467896432 +0000 UTC m=+159.060911766" watchObservedRunningTime="2026-02-16 12:34:13.468440227 +0000 UTC m=+159.061455561" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.482360 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.482693 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:13.982655536 +0000 UTC m=+159.575670870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.522965 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" podStartSLOduration=137.522942098 podStartE2EDuration="2m17.522942098s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.521327234 +0000 UTC m=+159.114342568" watchObservedRunningTime="2026-02-16 12:34:13.522942098 +0000 UTC m=+159.115957432" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.560918 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6w2wm" podStartSLOduration=137.560886766 podStartE2EDuration="2m17.560886766s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.560483485 +0000 UTC m=+159.153498819" watchObservedRunningTime="2026-02-16 12:34:13.560886766 +0000 UTC m=+159.153902090" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.583504 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" 
(UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.584104 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.084082171 +0000 UTC m=+159.677097505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.616965 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cbjpn" podStartSLOduration=137.61693783 podStartE2EDuration="2m17.61693783s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.613808524 +0000 UTC m=+159.206823858" watchObservedRunningTime="2026-02-16 12:34:13.61693783 +0000 UTC m=+159.209953164" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.684185 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.684456 4799 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.184439147 +0000 UTC m=+159.777454471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.698558 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2df75" podStartSLOduration=137.698541163 podStartE2EDuration="2m17.698541163s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.645352667 +0000 UTC m=+159.238368011" watchObservedRunningTime="2026-02-16 12:34:13.698541163 +0000 UTC m=+159.291556497" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.698776 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-77v9m" podStartSLOduration=136.698770419 podStartE2EDuration="2m16.698770419s" podCreationTimestamp="2026-02-16 12:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.697742881 +0000 UTC m=+159.290758215" watchObservedRunningTime="2026-02-16 12:34:13.698770419 +0000 UTC m=+159.291785753" Feb 16 12:34:13 crc 
kubenswrapper[4799]: I0216 12:34:13.727990 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj" podStartSLOduration=137.727971078 podStartE2EDuration="2m17.727971078s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.724919714 +0000 UTC m=+159.317935048" watchObservedRunningTime="2026-02-16 12:34:13.727971078 +0000 UTC m=+159.320986412" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.770737 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-87s27" podStartSLOduration=137.770721397 podStartE2EDuration="2m17.770721397s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.764502567 +0000 UTC m=+159.357517901" watchObservedRunningTime="2026-02-16 12:34:13.770721397 +0000 UTC m=+159.363736731" Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.785675 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.785976 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:34:14.285965055 +0000 UTC m=+159.878980389 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.889772 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.890195 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.390174696 +0000 UTC m=+159.983190030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.952367 4799 patch_prober.go:28] interesting pod/router-default-5444994796-nwzhj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 12:34:13 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld
Feb 16 12:34:13 crc kubenswrapper[4799]: [+]process-running ok
Feb 16 12:34:13 crc kubenswrapper[4799]: healthz check failed
Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.952454 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nwzhj" podUID="3cf88c98-4151-445d-918e-8b31e853f3f8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 12:34:13 crc kubenswrapper[4799]: I0216 12:34:13.991512 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:13 crc kubenswrapper[4799]: E0216 12:34:13.991867 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.491852528 +0000 UTC m=+160.084867862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.016727 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" podStartSLOduration=137.016709628 podStartE2EDuration="2m17.016709628s" podCreationTimestamp="2026-02-16 12:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:13.976003244 +0000 UTC m=+159.569018578" watchObservedRunningTime="2026-02-16 12:34:14.016709628 +0000 UTC m=+159.609724962"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.092581 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.093170 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.593140129 +0000 UTC m=+160.186155463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.093269 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.093621 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.593613402 +0000 UTC m=+160.186628736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.194568 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.194798 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.694758449 +0000 UTC m=+160.287773783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.194965 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.195421 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.695411497 +0000 UTC m=+160.288426831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.275526 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nkghs" event={"ID":"a06b895d-be38-4663-b92c-172f8a2bbe9d","Type":"ContainerStarted","Data":"3e4ae9931692a045b5a4e3fcfe4a53fd1b4f7cc9926c522f579dcb56d1c303a7"}
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.277644 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.279050 4799 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-66brb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.279138 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" podUID="ea2b5f46-58b6-41f8-9985-85d5236568ef" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.285429 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.296929 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.298736 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.798716494 +0000 UTC m=+160.391731828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.324771 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.407422 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.407822 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:14.907806398 +0000 UTC m=+160.500821732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.449453 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fs5dc"]
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.450869 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.466613 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.498575 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fs5dc"]
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.510667 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.510805 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skx5j\" (UniqueName: \"kubernetes.io/projected/3c8b6238-00b9-48d2-b1f5-4375b0555da6-kube-api-access-skx5j\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.510846 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-catalog-content\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.510901 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-utilities\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.511007 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.010986731 +0000 UTC m=+160.604002065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.597351 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9xm7s"]
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.598378 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.605674 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.611755 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-catalog-content\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.611806 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.611843 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-utilities\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.611882 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skx5j\" (UniqueName: \"kubernetes.io/projected/3c8b6238-00b9-48d2-b1f5-4375b0555da6-kube-api-access-skx5j\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.612692 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-catalog-content\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.612959 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.112947911 +0000 UTC m=+160.705963235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.613307 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-utilities\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.637660 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xm7s"]
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.665530 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skx5j\" (UniqueName: \"kubernetes.io/projected/3c8b6238-00b9-48d2-b1f5-4375b0555da6-kube-api-access-skx5j\") pod \"certified-operators-fs5dc\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.712828 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.713066 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.21304565 +0000 UTC m=+160.806060984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.713672 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-utilities\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.713758 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.713792 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-catalog-content\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.713823 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4729\" (UniqueName: \"kubernetes.io/projected/6734f76c-775d-47c3-8c54-e7c3e25a4575-kube-api-access-h4729\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.714296 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.214281304 +0000 UTC m=+160.807296638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.775278 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-smfjj"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.784235 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vtx46"]
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.787097 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.799176 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fs5dc"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.805964 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtx46"]
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.815029 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.815343 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-utilities\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.815388 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-catalog-content\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.815416 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4729\" (UniqueName: \"kubernetes.io/projected/6734f76c-775d-47c3-8c54-e7c3e25a4575-kube-api-access-h4729\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.815930 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.315911324 +0000 UTC m=+160.908926658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.816371 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-catalog-content\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.816754 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-utilities\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.848113 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4729\" (UniqueName: \"kubernetes.io/projected/6734f76c-775d-47c3-8c54-e7c3e25a4575-kube-api-access-h4729\") pod \"community-operators-9xm7s\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.918295 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-utilities\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.918347 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-catalog-content\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.918396 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.918463 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvlh\" (UniqueName: \"kubernetes.io/projected/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-kube-api-access-sfvlh\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:14 crc kubenswrapper[4799]: E0216 12:34:14.918888 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.418871961 +0000 UTC m=+161.011887295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.928669 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xm7s"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.956341 4799 patch_prober.go:28] interesting pod/router-default-5444994796-nwzhj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 12:34:14 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld
Feb 16 12:34:14 crc kubenswrapper[4799]: [+]process-running ok
Feb 16 12:34:14 crc kubenswrapper[4799]: healthz check failed
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.956409 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nwzhj" podUID="3cf88c98-4151-445d-918e-8b31e853f3f8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.981065 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bjtgr"]
Feb 16 12:34:14 crc kubenswrapper[4799]: I0216 12:34:14.982155 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.019679 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.019951 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvlh\" (UniqueName: \"kubernetes.io/projected/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-kube-api-access-sfvlh\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.020006 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-utilities\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.020025 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-catalog-content\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:15 crc kubenswrapper[4799]: E0216 12:34:15.020392 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.520324337 +0000 UTC m=+161.113339671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.020545 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-catalog-content\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.021051 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-utilities\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.048820 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvlh\" (UniqueName: \"kubernetes.io/projected/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-kube-api-access-sfvlh\") pod \"certified-operators-vtx46\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.060230 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bjtgr"]
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.101671 4799 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.113209 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtx46"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.124686 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-catalog-content\") pod \"community-operators-bjtgr\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.124745 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnhfl\" (UniqueName: \"kubernetes.io/projected/0b2108bc-d6b4-4de2-9163-f3d6714155b3-kube-api-access-nnhfl\") pod \"community-operators-bjtgr\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.124781 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-utilities\") pod \"community-operators-bjtgr\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.124831 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr"
Feb 16 12:34:15 crc kubenswrapper[4799]: E0216 12:34:15.125274 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.625256837 +0000 UTC m=+161.218272161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.226825 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.227300 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-catalog-content\") pod \"community-operators-bjtgr\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.227341 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnhfl\" (UniqueName: \"kubernetes.io/projected/0b2108bc-d6b4-4de2-9163-f3d6714155b3-kube-api-access-nnhfl\") pod \"community-operators-bjtgr\" (UID:
\"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.227383 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-utilities\") pod \"community-operators-bjtgr\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.227959 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-utilities\") pod \"community-operators-bjtgr\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.228145 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-catalog-content\") pod \"community-operators-bjtgr\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr" Feb 16 12:34:15 crc kubenswrapper[4799]: E0216 12:34:15.228296 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.728270266 +0000 UTC m=+161.321285600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.265257 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnhfl\" (UniqueName: \"kubernetes.io/projected/0b2108bc-d6b4-4de2-9163-f3d6714155b3-kube-api-access-nnhfl\") pod \"community-operators-bjtgr\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") " pod="openshift-marketplace/community-operators-bjtgr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.302379 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjtgr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.333005 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:15 crc kubenswrapper[4799]: E0216 12:34:15.333492 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:34:15.833473624 +0000 UTC m=+161.426488958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-df4xr" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.372080 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nkghs" event={"ID":"a06b895d-be38-4663-b92c-172f8a2bbe9d","Type":"ContainerStarted","Data":"97fa92043957df79cbabb5b565ecd27ec3ddc4f4fc0946d4fc1a44b4cc5f4761"} Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.374101 4799 generic.go:334] "Generic (PLEG): container finished" podID="e6ab08e0-f4bc-4dcc-abaf-876b063165ad" containerID="aaa7a0ce9bbd09bbe65107188212b2ff4c9b1f30ecbce2fafc52dbfbbfd09d09" exitCode=0 Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.374943 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" event={"ID":"e6ab08e0-f4bc-4dcc-abaf-876b063165ad","Type":"ContainerDied","Data":"aaa7a0ce9bbd09bbe65107188212b2ff4c9b1f30ecbce2fafc52dbfbbfd09d09"} Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.398356 4799 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T12:34:15.101698543Z","Handler":null,"Name":""} Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.399473 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.417043 4799 csi_plugin.go:100] kubernetes.io/csi: Trying to 
validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.417102 4799 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.433734 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.447721 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.538945 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.541239 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fs5dc"] Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.555066 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xm7s"] Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.567115 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.567175 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.696178 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-df4xr\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.754147 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtx46"] Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.890864 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.953864 4799 patch_prober.go:28] interesting pod/router-default-5444994796-nwzhj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:34:15 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Feb 16 12:34:15 crc kubenswrapper[4799]: [+]process-running ok Feb 16 12:34:15 crc kubenswrapper[4799]: healthz check failed Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.953948 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nwzhj" podUID="3cf88c98-4151-445d-918e-8b31e853f3f8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:34:15 crc kubenswrapper[4799]: I0216 12:34:15.991622 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bjtgr"] Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.174588 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.182380 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.185942 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.186025 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.224386 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.328393 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.340269 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gm29d" Feb 16 12:34:16 crc kubenswrapper[4799]: E0216 12:34:16.358726 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9501d397_7cf8_4712_b7bf_be0fc0c5eca4.slice/crio-df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9501d397_7cf8_4712_b7bf_be0fc0c5eca4.slice/crio-conmon-df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c.scope\": RecentStats: unable to find data in memory cache]" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.368982 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c0630c-ce47-47ba-9135-df6b7a13931a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"70c0630c-ce47-47ba-9135-df6b7a13931a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.369093 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c0630c-ce47-47ba-9135-df6b7a13931a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70c0630c-ce47-47ba-9135-df6b7a13931a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.373088 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-df4xr"] Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.399205 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wfjv"] Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.403069 4799 generic.go:334] "Generic (PLEG): container finished" podID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerID="fda4872590e9956393bc29d7b49a0aaa50db46d4aa6b7ba663e882b3770dd433" exitCode=0 Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.411373 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm7s" event={"ID":"6734f76c-775d-47c3-8c54-e7c3e25a4575","Type":"ContainerDied","Data":"fda4872590e9956393bc29d7b49a0aaa50db46d4aa6b7ba663e882b3770dd433"} Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.411426 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm7s" event={"ID":"6734f76c-775d-47c3-8c54-e7c3e25a4575","Type":"ContainerStarted","Data":"97eb50f0ceb673e0bfa79bc99647a6688866b452c0030b9eeb98de3127f3449e"} Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.411528 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.414093 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.421530 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.434105 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wfjv"] Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.457391 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerID="6ce086a1221dac988b6bbc2dea1b0c85457f5e96a0193701021cac5ab0172fee" exitCode=0 Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.457576 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjtgr" event={"ID":"0b2108bc-d6b4-4de2-9163-f3d6714155b3","Type":"ContainerDied","Data":"6ce086a1221dac988b6bbc2dea1b0c85457f5e96a0193701021cac5ab0172fee"} Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.457620 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjtgr" event={"ID":"0b2108bc-d6b4-4de2-9163-f3d6714155b3","Type":"ContainerStarted","Data":"24e8608df9d74d5f2c16cf2efa9af8c60e0eced2439b40e22c7959c84c88e7be"} Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.470253 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c0630c-ce47-47ba-9135-df6b7a13931a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70c0630c-ce47-47ba-9135-df6b7a13931a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.470401 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c0630c-ce47-47ba-9135-df6b7a13931a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70c0630c-ce47-47ba-9135-df6b7a13931a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.470531 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c0630c-ce47-47ba-9135-df6b7a13931a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"70c0630c-ce47-47ba-9135-df6b7a13931a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.504138 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nkghs" event={"ID":"a06b895d-be38-4663-b92c-172f8a2bbe9d","Type":"ContainerStarted","Data":"40dd637f2ed8321a0afd284b10de21cc45ad21173f0c189f2c7825deb0b867b6"} Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.512365 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c0630c-ce47-47ba-9135-df6b7a13931a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"70c0630c-ce47-47ba-9135-df6b7a13931a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.515331 4799 generic.go:334] "Generic (PLEG): container finished" podID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerID="eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77" exitCode=0 Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.515429 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fs5dc" event={"ID":"3c8b6238-00b9-48d2-b1f5-4375b0555da6","Type":"ContainerDied","Data":"eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77"} Feb 16 12:34:16 
crc kubenswrapper[4799]: I0216 12:34:16.515466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fs5dc" event={"ID":"3c8b6238-00b9-48d2-b1f5-4375b0555da6","Type":"ContainerStarted","Data":"741d0600e89e4abd70a1a39e77f9300c434d4f5731aa67077c4fdc3815063000"} Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.538665 4799 generic.go:334] "Generic (PLEG): container finished" podID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerID="df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c" exitCode=0 Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.539858 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtx46" event={"ID":"9501d397-7cf8-4712-b7bf-be0fc0c5eca4","Type":"ContainerDied","Data":"df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c"} Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.539889 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtx46" event={"ID":"9501d397-7cf8-4712-b7bf-be0fc0c5eca4","Type":"ContainerStarted","Data":"4a8e9452e36f4d4e8700f1770befb3aab745d283ad43c75fb78c53fdccce46ea"} Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.571356 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-catalog-content\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.572340 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7t92\" (UniqueName: \"kubernetes.io/projected/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-kube-api-access-g7t92\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " 
pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.572470 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-utilities\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.595423 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-njdbl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.595512 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-njdbl" podUID="de5f2060-f162-4fac-b3ef-2acda638dfb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.595944 4799 patch_prober.go:28] interesting pod/downloads-7954f5f757-njdbl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.595971 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-njdbl" podUID="de5f2060-f162-4fac-b3ef-2acda638dfb6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.679103 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.680104 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7t92\" (UniqueName: \"kubernetes.io/projected/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-kube-api-access-g7t92\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.680422 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-utilities\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.680683 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-catalog-content\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.684000 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-utilities\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.685064 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-catalog-content\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " 
pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.717724 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-nkghs" podStartSLOduration=12.717699987 podStartE2EDuration="12.717699987s" podCreationTimestamp="2026-02-16 12:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:16.685867726 +0000 UTC m=+162.278883060" watchObservedRunningTime="2026-02-16 12:34:16.717699987 +0000 UTC m=+162.310715321" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.730982 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7t92\" (UniqueName: \"kubernetes.io/projected/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-kube-api-access-g7t92\") pod \"redhat-marketplace-5wfjv\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.766508 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.776910 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7gh6f"] Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.783002 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.798233 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gh6f"] Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.887164 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-utilities\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.887244 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-catalog-content\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.887290 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhph4\" (UniqueName: \"kubernetes.io/projected/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-kube-api-access-mhph4\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.988296 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhph4\" (UniqueName: \"kubernetes.io/projected/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-kube-api-access-mhph4\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.988849 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-utilities\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.988892 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-catalog-content\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.989443 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-catalog-content\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:16 crc kubenswrapper[4799]: I0216 12:34:16.989688 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-utilities\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.018911 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhph4\" (UniqueName: \"kubernetes.io/projected/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-kube-api-access-mhph4\") pod \"redhat-marketplace-7gh6f\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.032880 4799 patch_prober.go:28] interesting pod/router-default-5444994796-nwzhj container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:34:17 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Feb 16 12:34:17 crc kubenswrapper[4799]: [+]process-running ok Feb 16 12:34:17 crc kubenswrapper[4799]: healthz check failed Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.032936 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nwzhj" podUID="3cf88c98-4151-445d-918e-8b31e853f3f8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.036396 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.146146 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.158168 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.196275 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b496h\" (UniqueName: \"kubernetes.io/projected/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-kube-api-access-b496h\") pod \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.196410 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-config-volume\") pod \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.196446 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-secret-volume\") pod \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\" (UID: \"e6ab08e0-f4bc-4dcc-abaf-876b063165ad\") " Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.199668 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-config-volume" (OuterVolumeSpecName: "config-volume") pod "e6ab08e0-f4bc-4dcc-abaf-876b063165ad" (UID: "e6ab08e0-f4bc-4dcc-abaf-876b063165ad"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.208230 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e6ab08e0-f4bc-4dcc-abaf-876b063165ad" (UID: "e6ab08e0-f4bc-4dcc-abaf-876b063165ad"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.227454 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-kube-api-access-b496h" (OuterVolumeSpecName: "kube-api-access-b496h") pod "e6ab08e0-f4bc-4dcc-abaf-876b063165ad" (UID: "e6ab08e0-f4bc-4dcc-abaf-876b063165ad"). InnerVolumeSpecName "kube-api-access-b496h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.237905 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.298285 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.298686 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.298704 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b496h\" (UniqueName: \"kubernetes.io/projected/e6ab08e0-f4bc-4dcc-abaf-876b063165ad-kube-api-access-b496h\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 
12:34:17.515506 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wfjv"] Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.585929 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jgm8v"] Feb 16 12:34:17 crc kubenswrapper[4799]: E0216 12:34:17.586953 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ab08e0-f4bc-4dcc-abaf-876b063165ad" containerName="collect-profiles" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.589706 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ab08e0-f4bc-4dcc-abaf-876b063165ad" containerName="collect-profiles" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.590140 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ab08e0-f4bc-4dcc-abaf-876b063165ad" containerName="collect-profiles" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.591806 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"70c0630c-ce47-47ba-9135-df6b7a13931a","Type":"ContainerStarted","Data":"107c9a34d6fb8ef17abb1752ae0cd7d26605dbe2a7fa8a1ca40c9d617db07db0"} Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.592005 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.595311 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.601431 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.603475 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l" event={"ID":"e6ab08e0-f4bc-4dcc-abaf-876b063165ad","Type":"ContainerDied","Data":"4aa774df418abc094c085268c259f2a1e40ac21e2a4d183eae779a3a199984af"} Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.603545 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa774df418abc094c085268c259f2a1e40ac21e2a4d183eae779a3a199984af" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.615619 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgm8v"] Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.615681 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" event={"ID":"67094e0b-8edb-4b4f-aed3-a704b0854384","Type":"ContainerStarted","Data":"b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf"} Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.615711 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" event={"ID":"67094e0b-8edb-4b4f-aed3-a704b0854384","Type":"ContainerStarted","Data":"e13328f6fa153aa7162850c2532a7466a6784c89214dd3062825510645856c68"} Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.616398 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.670533 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" podStartSLOduration=141.670510627 podStartE2EDuration="2m21.670510627s" podCreationTimestamp="2026-02-16 
12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:17.665751337 +0000 UTC m=+163.258766661" watchObservedRunningTime="2026-02-16 12:34:17.670510627 +0000 UTC m=+163.263525981" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.671757 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wfjv" event={"ID":"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d","Type":"ContainerStarted","Data":"e5e9552ed59b81498288a081b92a9f1d6297ce7c836a37cdb5dface96d4b7791"} Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.713789 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-utilities\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.713847 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-catalog-content\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.713979 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btnxs\" (UniqueName: \"kubernetes.io/projected/a302cd9c-7040-4248-8fc0-55d280e45b9e-kube-api-access-btnxs\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.755465 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.756474 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.761261 4799 patch_prober.go:28] interesting pod/console-f9d7485db-kkq5f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.761327 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kkq5f" podUID="06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.798860 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gh6f"] Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.815226 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btnxs\" (UniqueName: \"kubernetes.io/projected/a302cd9c-7040-4248-8fc0-55d280e45b9e-kube-api-access-btnxs\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.815303 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-utilities\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.815329 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-catalog-content\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.816748 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-utilities\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.817486 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-catalog-content\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.851592 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrtf8" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.858611 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btnxs\" (UniqueName: \"kubernetes.io/projected/a302cd9c-7040-4248-8fc0-55d280e45b9e-kube-api-access-btnxs\") pod \"redhat-operators-jgm8v\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.946377 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.955607 4799 patch_prober.go:28] interesting pod/router-default-5444994796-nwzhj container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:34:17 crc kubenswrapper[4799]: [-]has-synced failed: reason withheld Feb 16 12:34:17 crc kubenswrapper[4799]: [+]process-running ok Feb 16 12:34:17 crc kubenswrapper[4799]: healthz check failed Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.955691 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nwzhj" podUID="3cf88c98-4151-445d-918e-8b31e853f3f8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.985039 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8p7r7"] Feb 16 12:34:17 crc kubenswrapper[4799]: I0216 12:34:17.989025 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.010065 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8p7r7"] Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.049613 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.122743 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-utilities\") pod \"redhat-operators-8p7r7\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.123364 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4m8p\" (UniqueName: \"kubernetes.io/projected/8c24a8cb-4a90-46d9-a128-64c6b00fa185-kube-api-access-p4m8p\") pod \"redhat-operators-8p7r7\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.123418 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-catalog-content\") pod \"redhat-operators-8p7r7\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.228668 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-catalog-content\") pod \"redhat-operators-8p7r7\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.228785 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-utilities\") pod \"redhat-operators-8p7r7\" (UID: 
\"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.228908 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4m8p\" (UniqueName: \"kubernetes.io/projected/8c24a8cb-4a90-46d9-a128-64c6b00fa185-kube-api-access-p4m8p\") pod \"redhat-operators-8p7r7\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.229751 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-utilities\") pod \"redhat-operators-8p7r7\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.229743 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-catalog-content\") pod \"redhat-operators-8p7r7\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.279109 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4m8p\" (UniqueName: \"kubernetes.io/projected/8c24a8cb-4a90-46d9-a128-64c6b00fa185-kube-api-access-p4m8p\") pod \"redhat-operators-8p7r7\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.319513 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.510450 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgm8v"] Feb 16 12:34:18 crc kubenswrapper[4799]: W0216 12:34:18.556688 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda302cd9c_7040_4248_8fc0_55d280e45b9e.slice/crio-9a1fea5518c41a939ea61102a13d633354cf10c2bb29fee60a196969b930fd2f WatchSource:0}: Error finding container 9a1fea5518c41a939ea61102a13d633354cf10c2bb29fee60a196969b930fd2f: Status 404 returned error can't find the container with id 9a1fea5518c41a939ea61102a13d633354cf10c2bb29fee60a196969b930fd2f Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.726009 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgm8v" event={"ID":"a302cd9c-7040-4248-8fc0-55d280e45b9e","Type":"ContainerStarted","Data":"9a1fea5518c41a939ea61102a13d633354cf10c2bb29fee60a196969b930fd2f"} Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.728914 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gh6f" event={"ID":"694320f7-5f83-4b9c-9995-6ec38f6ee4cb","Type":"ContainerStarted","Data":"c081e91ae5539389bc5835ebbb16d3755fb335e320b26524c0a138ea0b26ab69"} Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.738437 4799 generic.go:334] "Generic (PLEG): container finished" podID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerID="c0b552150b5f198afb5d2bc1132d179dd7571b4044a8b4c3fd15530321dcbba2" exitCode=0 Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.738609 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wfjv" 
event={"ID":"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d","Type":"ContainerDied","Data":"c0b552150b5f198afb5d2bc1132d179dd7571b4044a8b4c3fd15530321dcbba2"} Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.742612 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"70c0630c-ce47-47ba-9135-df6b7a13931a","Type":"ContainerStarted","Data":"72b4bcd09a94715ca405428de6304797d410cf6a94d65c87bf48f2a26d0db95f"} Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.926634 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8p7r7"] Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.955715 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:18 crc kubenswrapper[4799]: I0216 12:34:18.960140 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nwzhj" Feb 16 12:34:19 crc kubenswrapper[4799]: W0216 12:34:19.020892 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c24a8cb_4a90_46d9_a128_64c6b00fa185.slice/crio-8ea488d0c7747f7cb18d59e1c9d99ae36ae54bca2eec8f2f7fd46e9f25ac245b WatchSource:0}: Error finding container 8ea488d0c7747f7cb18d59e1c9d99ae36ae54bca2eec8f2f7fd46e9f25ac245b: Status 404 returned error can't find the container with id 8ea488d0c7747f7cb18d59e1c9d99ae36ae54bca2eec8f2f7fd46e9f25ac245b Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.572748 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:34:19 crc 
kubenswrapper[4799]: I0216 12:34:19.591382 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd-metrics-certs\") pod \"network-metrics-daemon-2clkm\" (UID: \"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd\") " pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.772938 4799 generic.go:334] "Generic (PLEG): container finished" podID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerID="4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e" exitCode=0 Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.773091 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p7r7" event={"ID":"8c24a8cb-4a90-46d9-a128-64c6b00fa185","Type":"ContainerDied","Data":"4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e"} Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.773143 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p7r7" event={"ID":"8c24a8cb-4a90-46d9-a128-64c6b00fa185","Type":"ContainerStarted","Data":"8ea488d0c7747f7cb18d59e1c9d99ae36ae54bca2eec8f2f7fd46e9f25ac245b"} Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.779705 4799 generic.go:334] "Generic (PLEG): container finished" podID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerID="0dec1c5253cdd1366578b3de392f0591363f882ca3140969eaf142c95d0286a0" exitCode=0 Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.779753 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgm8v" event={"ID":"a302cd9c-7040-4248-8fc0-55d280e45b9e","Type":"ContainerDied","Data":"0dec1c5253cdd1366578b3de392f0591363f882ca3140969eaf142c95d0286a0"} Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.821546 4799 generic.go:334] "Generic (PLEG): container finished" podID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" 
containerID="e6089210bce3de291b99e4495d9b6ba2ca1a4f04578c729a3c8d2ddc55ac8ea7" exitCode=0 Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.822311 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gh6f" event={"ID":"694320f7-5f83-4b9c-9995-6ec38f6ee4cb","Type":"ContainerDied","Data":"e6089210bce3de291b99e4495d9b6ba2ca1a4f04578c729a3c8d2ddc55ac8ea7"} Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.853030 4799 generic.go:334] "Generic (PLEG): container finished" podID="70c0630c-ce47-47ba-9135-df6b7a13931a" containerID="72b4bcd09a94715ca405428de6304797d410cf6a94d65c87bf48f2a26d0db95f" exitCode=0 Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.853966 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"70c0630c-ce47-47ba-9135-df6b7a13931a","Type":"ContainerDied","Data":"72b4bcd09a94715ca405428de6304797d410cf6a94d65c87bf48f2a26d0db95f"} Feb 16 12:34:19 crc kubenswrapper[4799]: I0216 12:34:19.870417 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2clkm" Feb 16 12:34:20 crc kubenswrapper[4799]: I0216 12:34:20.347302 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2clkm"] Feb 16 12:34:20 crc kubenswrapper[4799]: I0216 12:34:20.869415 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2clkm" event={"ID":"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd","Type":"ContainerStarted","Data":"d4493ced6720084f77eb8321903fd8862c3608d6af4e88a4979f62f58b940163"} Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.383422 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.437919 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c0630c-ce47-47ba-9135-df6b7a13931a-kube-api-access\") pod \"70c0630c-ce47-47ba-9135-df6b7a13931a\" (UID: \"70c0630c-ce47-47ba-9135-df6b7a13931a\") " Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.438040 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c0630c-ce47-47ba-9135-df6b7a13931a-kubelet-dir\") pod \"70c0630c-ce47-47ba-9135-df6b7a13931a\" (UID: \"70c0630c-ce47-47ba-9135-df6b7a13931a\") " Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.438235 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c0630c-ce47-47ba-9135-df6b7a13931a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70c0630c-ce47-47ba-9135-df6b7a13931a" (UID: "70c0630c-ce47-47ba-9135-df6b7a13931a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.438755 4799 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70c0630c-ce47-47ba-9135-df6b7a13931a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.460089 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c0630c-ce47-47ba-9135-df6b7a13931a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70c0630c-ce47-47ba-9135-df6b7a13931a" (UID: "70c0630c-ce47-47ba-9135-df6b7a13931a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.540240 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70c0630c-ce47-47ba-9135-df6b7a13931a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.585085 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:34:21 crc kubenswrapper[4799]: E0216 12:34:21.587731 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c0630c-ce47-47ba-9135-df6b7a13931a" containerName="pruner" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.587745 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c0630c-ce47-47ba-9135-df6b7a13931a" containerName="pruner" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.587844 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c0630c-ce47-47ba-9135-df6b7a13931a" containerName="pruner" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.588325 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.592785 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.593116 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.595815 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.651820 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac50c049-b50f-412b-85b7-aa9a562f92d6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ac50c049-b50f-412b-85b7-aa9a562f92d6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.654344 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac50c049-b50f-412b-85b7-aa9a562f92d6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ac50c049-b50f-412b-85b7-aa9a562f92d6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.756522 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac50c049-b50f-412b-85b7-aa9a562f92d6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ac50c049-b50f-412b-85b7-aa9a562f92d6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.756694 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ac50c049-b50f-412b-85b7-aa9a562f92d6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ac50c049-b50f-412b-85b7-aa9a562f92d6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.756715 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac50c049-b50f-412b-85b7-aa9a562f92d6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ac50c049-b50f-412b-85b7-aa9a562f92d6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.799075 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.799168 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.803866 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac50c049-b50f-412b-85b7-aa9a562f92d6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ac50c049-b50f-412b-85b7-aa9a562f92d6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.881054 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2clkm" 
event={"ID":"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd","Type":"ContainerStarted","Data":"cbd0a818e9727f442d7bf856c2a414227b441be499a7bfea3d752c86ec8d898b"} Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.881109 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2clkm" event={"ID":"e9700d1f-c0e3-4e3b-ae76-4c80460ccdbd","Type":"ContainerStarted","Data":"e5741f5305b708e5e649f4655cb8f6352811f4a30b9d133c672fb0674dad4467"} Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.895415 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"70c0630c-ce47-47ba-9135-df6b7a13931a","Type":"ContainerDied","Data":"107c9a34d6fb8ef17abb1752ae0cd7d26605dbe2a7fa8a1ca40c9d617db07db0"} Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.895482 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="107c9a34d6fb8ef17abb1752ae0cd7d26605dbe2a7fa8a1ca40c9d617db07db0" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.895574 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.896841 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2clkm" podStartSLOduration=145.89681702 podStartE2EDuration="2m25.89681702s" podCreationTimestamp="2026-02-16 12:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:21.896480181 +0000 UTC m=+167.489495515" watchObservedRunningTime="2026-02-16 12:34:21.89681702 +0000 UTC m=+167.489832354" Feb 16 12:34:21 crc kubenswrapper[4799]: I0216 12:34:21.918251 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:22 crc kubenswrapper[4799]: I0216 12:34:22.377061 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:34:22 crc kubenswrapper[4799]: W0216 12:34:22.522265 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podac50c049_b50f_412b_85b7_aa9a562f92d6.slice/crio-739d29764b97ff6b39ef8a2e6281cf856326c3e96f44bed3a7108f9bca0a0ab7 WatchSource:0}: Error finding container 739d29764b97ff6b39ef8a2e6281cf856326c3e96f44bed3a7108f9bca0a0ab7: Status 404 returned error can't find the container with id 739d29764b97ff6b39ef8a2e6281cf856326c3e96f44bed3a7108f9bca0a0ab7 Feb 16 12:34:23 crc kubenswrapper[4799]: I0216 12:34:23.004038 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ac50c049-b50f-412b-85b7-aa9a562f92d6","Type":"ContainerStarted","Data":"739d29764b97ff6b39ef8a2e6281cf856326c3e96f44bed3a7108f9bca0a0ab7"} Feb 16 12:34:23 crc kubenswrapper[4799]: I0216 12:34:23.016904 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ndp46" Feb 16 12:34:24 crc kubenswrapper[4799]: I0216 12:34:24.086595 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ac50c049-b50f-412b-85b7-aa9a562f92d6","Type":"ContainerStarted","Data":"4ae08c56304d727272a30f55a58613e580bd33889da5a9af6b77af57263b31cc"} Feb 16 12:34:24 crc kubenswrapper[4799]: I0216 12:34:24.106846 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.106826886 podStartE2EDuration="3.106826886s" podCreationTimestamp="2026-02-16 12:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 12:34:24.103764362 +0000 UTC m=+169.696779696" watchObservedRunningTime="2026-02-16 12:34:24.106826886 +0000 UTC m=+169.699842220" Feb 16 12:34:25 crc kubenswrapper[4799]: I0216 12:34:25.115106 4799 generic.go:334] "Generic (PLEG): container finished" podID="ac50c049-b50f-412b-85b7-aa9a562f92d6" containerID="4ae08c56304d727272a30f55a58613e580bd33889da5a9af6b77af57263b31cc" exitCode=0 Feb 16 12:34:25 crc kubenswrapper[4799]: I0216 12:34:25.115171 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ac50c049-b50f-412b-85b7-aa9a562f92d6","Type":"ContainerDied","Data":"4ae08c56304d727272a30f55a58613e580bd33889da5a9af6b77af57263b31cc"} Feb 16 12:34:26 crc kubenswrapper[4799]: I0216 12:34:26.584371 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-njdbl" Feb 16 12:34:27 crc kubenswrapper[4799]: I0216 12:34:27.761329 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:27 crc kubenswrapper[4799]: I0216 12:34:27.766726 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:34:30 crc kubenswrapper[4799]: I0216 12:34:30.907930 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-66brb"] Feb 16 12:34:30 crc kubenswrapper[4799]: I0216 12:34:30.918351 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs"] Feb 16 12:34:30 crc kubenswrapper[4799]: I0216 12:34:30.918610 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" podUID="ea2b5f46-58b6-41f8-9985-85d5236568ef" containerName="controller-manager" 
containerID="cri-o://8bbf57d40d90afc627265d85de582a6374fda2500e4d5f4dfcb80f687b2091ce" gracePeriod=30 Feb 16 12:34:30 crc kubenswrapper[4799]: I0216 12:34:30.918746 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" podUID="6e9b7ea2-185b-443f-8aca-7286501b2a80" containerName="route-controller-manager" containerID="cri-o://db5289526292369c62f188ca4ebc443f52d129d51a05092d969277bbf3b1614b" gracePeriod=30 Feb 16 12:34:31 crc kubenswrapper[4799]: I0216 12:34:31.193325 4799 generic.go:334] "Generic (PLEG): container finished" podID="ea2b5f46-58b6-41f8-9985-85d5236568ef" containerID="8bbf57d40d90afc627265d85de582a6374fda2500e4d5f4dfcb80f687b2091ce" exitCode=0 Feb 16 12:34:31 crc kubenswrapper[4799]: I0216 12:34:31.193397 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" event={"ID":"ea2b5f46-58b6-41f8-9985-85d5236568ef","Type":"ContainerDied","Data":"8bbf57d40d90afc627265d85de582a6374fda2500e4d5f4dfcb80f687b2091ce"} Feb 16 12:34:31 crc kubenswrapper[4799]: I0216 12:34:31.195910 4799 generic.go:334] "Generic (PLEG): container finished" podID="6e9b7ea2-185b-443f-8aca-7286501b2a80" containerID="db5289526292369c62f188ca4ebc443f52d129d51a05092d969277bbf3b1614b" exitCode=0 Feb 16 12:34:31 crc kubenswrapper[4799]: I0216 12:34:31.195961 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" event={"ID":"6e9b7ea2-185b-443f-8aca-7286501b2a80","Type":"ContainerDied","Data":"db5289526292369c62f188ca4ebc443f52d129d51a05092d969277bbf3b1614b"} Feb 16 12:34:32 crc kubenswrapper[4799]: I0216 12:34:32.275923 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:32 crc kubenswrapper[4799]: I0216 12:34:32.373449 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac50c049-b50f-412b-85b7-aa9a562f92d6-kube-api-access\") pod \"ac50c049-b50f-412b-85b7-aa9a562f92d6\" (UID: \"ac50c049-b50f-412b-85b7-aa9a562f92d6\") " Feb 16 12:34:32 crc kubenswrapper[4799]: I0216 12:34:32.373522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac50c049-b50f-412b-85b7-aa9a562f92d6-kubelet-dir\") pod \"ac50c049-b50f-412b-85b7-aa9a562f92d6\" (UID: \"ac50c049-b50f-412b-85b7-aa9a562f92d6\") " Feb 16 12:34:32 crc kubenswrapper[4799]: I0216 12:34:32.373957 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac50c049-b50f-412b-85b7-aa9a562f92d6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ac50c049-b50f-412b-85b7-aa9a562f92d6" (UID: "ac50c049-b50f-412b-85b7-aa9a562f92d6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:34:32 crc kubenswrapper[4799]: I0216 12:34:32.380977 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac50c049-b50f-412b-85b7-aa9a562f92d6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ac50c049-b50f-412b-85b7-aa9a562f92d6" (UID: "ac50c049-b50f-412b-85b7-aa9a562f92d6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:34:32 crc kubenswrapper[4799]: I0216 12:34:32.476036 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac50c049-b50f-412b-85b7-aa9a562f92d6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:32 crc kubenswrapper[4799]: I0216 12:34:32.476110 4799 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac50c049-b50f-412b-85b7-aa9a562f92d6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:33 crc kubenswrapper[4799]: I0216 12:34:33.220527 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ac50c049-b50f-412b-85b7-aa9a562f92d6","Type":"ContainerDied","Data":"739d29764b97ff6b39ef8a2e6281cf856326c3e96f44bed3a7108f9bca0a0ab7"} Feb 16 12:34:33 crc kubenswrapper[4799]: I0216 12:34:33.220590 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="739d29764b97ff6b39ef8a2e6281cf856326c3e96f44bed3a7108f9bca0a0ab7" Feb 16 12:34:33 crc kubenswrapper[4799]: I0216 12:34:33.220708 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:34:33 crc kubenswrapper[4799]: I0216 12:34:33.525292 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:34:35 crc kubenswrapper[4799]: I0216 12:34:35.898494 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.006655 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.013278 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.049361 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55c5f675c5-s4zw9"] Feb 16 12:34:38 crc kubenswrapper[4799]: E0216 12:34:38.049828 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b7ea2-185b-443f-8aca-7286501b2a80" containerName="route-controller-manager" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.049850 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b7ea2-185b-443f-8aca-7286501b2a80" containerName="route-controller-manager" Feb 16 12:34:38 crc kubenswrapper[4799]: E0216 12:34:38.049880 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac50c049-b50f-412b-85b7-aa9a562f92d6" containerName="pruner" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.049893 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac50c049-b50f-412b-85b7-aa9a562f92d6" containerName="pruner" Feb 16 12:34:38 crc kubenswrapper[4799]: E0216 12:34:38.049921 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2b5f46-58b6-41f8-9985-85d5236568ef" containerName="controller-manager" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.049934 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2b5f46-58b6-41f8-9985-85d5236568ef" containerName="controller-manager" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.050120 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2b5f46-58b6-41f8-9985-85d5236568ef" containerName="controller-manager" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.050170 4799 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ac50c049-b50f-412b-85b7-aa9a562f92d6" containerName="pruner" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.050189 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b7ea2-185b-443f-8aca-7286501b2a80" containerName="route-controller-manager" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.050879 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55c5f675c5-s4zw9"] Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.051000 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080375 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69htr\" (UniqueName: \"kubernetes.io/projected/ea2b5f46-58b6-41f8-9985-85d5236568ef-kube-api-access-69htr\") pod \"ea2b5f46-58b6-41f8-9985-85d5236568ef\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080475 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config\") pod \"ea2b5f46-58b6-41f8-9985-85d5236568ef\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080518 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert\") pod \"6e9b7ea2-185b-443f-8aca-7286501b2a80\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080557 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert\") pod \"ea2b5f46-58b6-41f8-9985-85d5236568ef\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080594 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles\") pod \"ea2b5f46-58b6-41f8-9985-85d5236568ef\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080633 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4nnb\" (UniqueName: \"kubernetes.io/projected/6e9b7ea2-185b-443f-8aca-7286501b2a80-kube-api-access-k4nnb\") pod \"6e9b7ea2-185b-443f-8aca-7286501b2a80\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080675 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca\") pod \"ea2b5f46-58b6-41f8-9985-85d5236568ef\" (UID: \"ea2b5f46-58b6-41f8-9985-85d5236568ef\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080710 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca\") pod \"6e9b7ea2-185b-443f-8aca-7286501b2a80\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080735 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config\") pod \"6e9b7ea2-185b-443f-8aca-7286501b2a80\" (UID: \"6e9b7ea2-185b-443f-8aca-7286501b2a80\") " Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080892 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-config\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080915 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6640870e-ae01-4993-932a-f688af106bc8-serving-cert\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080937 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-client-ca\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.080970 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4r88\" (UniqueName: \"kubernetes.io/projected/6640870e-ae01-4993-932a-f688af106bc8-kube-api-access-g4r88\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.081008 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-proxy-ca-bundles\") pod 
\"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.081910 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ea2b5f46-58b6-41f8-9985-85d5236568ef" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.082041 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca" (OuterVolumeSpecName: "client-ca") pod "6e9b7ea2-185b-443f-8aca-7286501b2a80" (UID: "6e9b7ea2-185b-443f-8aca-7286501b2a80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.081970 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca" (OuterVolumeSpecName: "client-ca") pod "ea2b5f46-58b6-41f8-9985-85d5236568ef" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.082006 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config" (OuterVolumeSpecName: "config") pod "6e9b7ea2-185b-443f-8aca-7286501b2a80" (UID: "6e9b7ea2-185b-443f-8aca-7286501b2a80"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.082741 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config" (OuterVolumeSpecName: "config") pod "ea2b5f46-58b6-41f8-9985-85d5236568ef" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.089622 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2b5f46-58b6-41f8-9985-85d5236568ef-kube-api-access-69htr" (OuterVolumeSpecName: "kube-api-access-69htr") pod "ea2b5f46-58b6-41f8-9985-85d5236568ef" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef"). InnerVolumeSpecName "kube-api-access-69htr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.089715 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9b7ea2-185b-443f-8aca-7286501b2a80-kube-api-access-k4nnb" (OuterVolumeSpecName: "kube-api-access-k4nnb") pod "6e9b7ea2-185b-443f-8aca-7286501b2a80" (UID: "6e9b7ea2-185b-443f-8aca-7286501b2a80"). InnerVolumeSpecName "kube-api-access-k4nnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.091521 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6e9b7ea2-185b-443f-8aca-7286501b2a80" (UID: "6e9b7ea2-185b-443f-8aca-7286501b2a80"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.092234 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ea2b5f46-58b6-41f8-9985-85d5236568ef" (UID: "ea2b5f46-58b6-41f8-9985-85d5236568ef"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181699 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6640870e-ae01-4993-932a-f688af106bc8-serving-cert\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181754 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-config\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181778 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-client-ca\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181822 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4r88\" (UniqueName: \"kubernetes.io/projected/6640870e-ae01-4993-932a-f688af106bc8-kube-api-access-g4r88\") pod 
\"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181874 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-proxy-ca-bundles\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181936 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181959 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69htr\" (UniqueName: \"kubernetes.io/projected/ea2b5f46-58b6-41f8-9985-85d5236568ef-kube-api-access-69htr\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181973 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.181984 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9b7ea2-185b-443f-8aca-7286501b2a80-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.182497 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2b5f46-58b6-41f8-9985-85d5236568ef-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.182514 4799 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.182528 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4nnb\" (UniqueName: \"kubernetes.io/projected/6e9b7ea2-185b-443f-8aca-7286501b2a80-kube-api-access-k4nnb\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.182540 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea2b5f46-58b6-41f8-9985-85d5236568ef-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.182552 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9b7ea2-185b-443f-8aca-7286501b2a80-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.183578 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-client-ca\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.183689 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-config\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.183741 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-proxy-ca-bundles\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.187201 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6640870e-ae01-4993-932a-f688af106bc8-serving-cert\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.200346 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4r88\" (UniqueName: \"kubernetes.io/projected/6640870e-ae01-4993-932a-f688af106bc8-kube-api-access-g4r88\") pod \"controller-manager-55c5f675c5-s4zw9\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.275304 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" event={"ID":"6e9b7ea2-185b-443f-8aca-7286501b2a80","Type":"ContainerDied","Data":"bc81dd9dd523a0af1ccc6b32185e09b5aad807d6759a5bf13e96c3bf5ddceaee"} Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.275349 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.275397 4799 scope.go:117] "RemoveContainer" containerID="db5289526292369c62f188ca4ebc443f52d129d51a05092d969277bbf3b1614b" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.278052 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" event={"ID":"ea2b5f46-58b6-41f8-9985-85d5236568ef","Type":"ContainerDied","Data":"dd30f27eb757cd5260b0ac03e6fa0046041a656fec6c68155f37987b7202ba84"} Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.278166 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-66brb" Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.312296 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-66brb"] Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.315683 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-66brb"] Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.329026 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs"] Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.332072 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sx8cs"] Feb 16 12:34:38 crc kubenswrapper[4799]: I0216 12:34:38.371576 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:39 crc kubenswrapper[4799]: I0216 12:34:39.160226 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9b7ea2-185b-443f-8aca-7286501b2a80" path="/var/lib/kubelet/pods/6e9b7ea2-185b-443f-8aca-7286501b2a80/volumes" Feb 16 12:34:39 crc kubenswrapper[4799]: I0216 12:34:39.160988 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2b5f46-58b6-41f8-9985-85d5236568ef" path="/var/lib/kubelet/pods/ea2b5f46-58b6-41f8-9985-85d5236568ef/volumes" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.247109 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg"] Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.248566 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.254209 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg"] Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.287222 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.287456 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.287610 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.287747 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:34:42 crc 
kubenswrapper[4799]: I0216 12:34:42.287824 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.287911 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.347223 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-config\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.347289 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-serving-cert\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.347321 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzzc\" (UniqueName: \"kubernetes.io/projected/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-kube-api-access-xvzzc\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.347412 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-client-ca\") 
pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.449401 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-config\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.449460 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-serving-cert\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.449488 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzzc\" (UniqueName: \"kubernetes.io/projected/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-kube-api-access-xvzzc\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.449538 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-client-ca\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.450492 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-client-ca\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.451917 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-config\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.458407 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-serving-cert\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.466904 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzzc\" (UniqueName: \"kubernetes.io/projected/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-kube-api-access-xvzzc\") pod \"route-controller-manager-5ddcfd97cc-fv9rg\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:42 crc kubenswrapper[4799]: I0216 12:34:42.612387 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:47 crc kubenswrapper[4799]: I0216 12:34:47.870644 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5p95v" Feb 16 12:34:48 crc kubenswrapper[4799]: E0216 12:34:48.014132 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 16 12:34:48 crc kubenswrapper[4799]: E0216 12:34:48.014901 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7t92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,
ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5wfjv_openshift-marketplace(897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:34:48 crc kubenswrapper[4799]: E0216 12:34:48.016331 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5wfjv" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" Feb 16 12:34:48 crc kubenswrapper[4799]: I0216 12:34:48.052917 4799 scope.go:117] "RemoveContainer" containerID="8bbf57d40d90afc627265d85de582a6374fda2500e4d5f4dfcb80f687b2091ce" Feb 16 12:34:48 crc kubenswrapper[4799]: E0216 12:34:48.091744 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 12:34:48 crc kubenswrapper[4799]: E0216 12:34:48.091993 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnhfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bjtgr_openshift-marketplace(0b2108bc-d6b4-4de2-9163-f3d6714155b3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:34:48 crc kubenswrapper[4799]: E0216 12:34:48.093909 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bjtgr" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" Feb 16 12:34:48 crc 
kubenswrapper[4799]: I0216 12:34:48.360408 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gh6f" event={"ID":"694320f7-5f83-4b9c-9995-6ec38f6ee4cb","Type":"ContainerStarted","Data":"5e18099c8c3de77a6782204e86cca749d55f4559b19474f5729ad59391803fdd"} Feb 16 12:34:48 crc kubenswrapper[4799]: E0216 12:34:48.389697 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bjtgr" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" Feb 16 12:34:48 crc kubenswrapper[4799]: E0216 12:34:48.390862 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5wfjv" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" Feb 16 12:34:48 crc kubenswrapper[4799]: I0216 12:34:48.407276 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55c5f675c5-s4zw9"] Feb 16 12:34:48 crc kubenswrapper[4799]: I0216 12:34:48.719207 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg"] Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.394246 4799 generic.go:334] "Generic (PLEG): container finished" podID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerID="5e18099c8c3de77a6782204e86cca749d55f4559b19474f5729ad59391803fdd" exitCode=0 Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.394602 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gh6f" 
event={"ID":"694320f7-5f83-4b9c-9995-6ec38f6ee4cb","Type":"ContainerDied","Data":"5e18099c8c3de77a6782204e86cca749d55f4559b19474f5729ad59391803fdd"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.397298 4799 generic.go:334] "Generic (PLEG): container finished" podID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerID="e36753c15a934e39445060934bdf3ccabe515ea921f08a19086ebf353adae8a0" exitCode=0 Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.397350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm7s" event={"ID":"6734f76c-775d-47c3-8c54-e7c3e25a4575","Type":"ContainerDied","Data":"e36753c15a934e39445060934bdf3ccabe515ea921f08a19086ebf353adae8a0"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.399230 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" event={"ID":"6640870e-ae01-4993-932a-f688af106bc8","Type":"ContainerStarted","Data":"64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.399255 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" event={"ID":"6640870e-ae01-4993-932a-f688af106bc8","Type":"ContainerStarted","Data":"0ba581f215a076752503bfbc024ee0ccc10654a1872c226290797cd3c85d8e37"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.399405 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.400324 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" event={"ID":"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c","Type":"ContainerStarted","Data":"44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250"} Feb 16 12:34:49 crc 
kubenswrapper[4799]: I0216 12:34:49.400350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" event={"ID":"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c","Type":"ContainerStarted","Data":"b198a4de47de23389a4d8ae9e630e69e2d4158345e43d3922b11103f88730af8"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.400831 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.404758 4799 generic.go:334] "Generic (PLEG): container finished" podID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerID="8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8" exitCode=0 Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.404843 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fs5dc" event={"ID":"3c8b6238-00b9-48d2-b1f5-4375b0555da6","Type":"ContainerDied","Data":"8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.406431 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.407164 4799 generic.go:334] "Generic (PLEG): container finished" podID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerID="c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12" exitCode=0 Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.407243 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p7r7" event={"ID":"8c24a8cb-4a90-46d9-a128-64c6b00fa185","Type":"ContainerDied","Data":"c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.410434 4799 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgm8v" event={"ID":"a302cd9c-7040-4248-8fc0-55d280e45b9e","Type":"ContainerStarted","Data":"250cb01f10fc3de81937d81b18153c11d970a5768d2282e9142076c8064c3438"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.413668 4799 generic.go:334] "Generic (PLEG): container finished" podID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerID="75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415" exitCode=0 Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.413711 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtx46" event={"ID":"9501d397-7cf8-4712-b7bf-be0fc0c5eca4","Type":"ContainerDied","Data":"75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415"} Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.460585 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.549117 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" podStartSLOduration=19.549099105 podStartE2EDuration="19.549099105s" podCreationTimestamp="2026-02-16 12:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:49.54671227 +0000 UTC m=+195.139727604" watchObservedRunningTime="2026-02-16 12:34:49.549099105 +0000 UTC m=+195.142114439" Feb 16 12:34:49 crc kubenswrapper[4799]: I0216 12:34:49.649405 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" podStartSLOduration=18.649380759 podStartE2EDuration="18.649380759s" podCreationTimestamp="2026-02-16 12:34:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:49.605654502 +0000 UTC m=+195.198669836" watchObservedRunningTime="2026-02-16 12:34:49.649380759 +0000 UTC m=+195.242396093" Feb 16 12:34:50 crc kubenswrapper[4799]: I0216 12:34:50.424147 4799 generic.go:334] "Generic (PLEG): container finished" podID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerID="250cb01f10fc3de81937d81b18153c11d970a5768d2282e9142076c8064c3438" exitCode=0 Feb 16 12:34:50 crc kubenswrapper[4799]: I0216 12:34:50.424242 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgm8v" event={"ID":"a302cd9c-7040-4248-8fc0-55d280e45b9e","Type":"ContainerDied","Data":"250cb01f10fc3de81937d81b18153c11d970a5768d2282e9142076c8064c3438"} Feb 16 12:34:50 crc kubenswrapper[4799]: I0216 12:34:50.876756 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55c5f675c5-s4zw9"] Feb 16 12:34:50 crc kubenswrapper[4799]: I0216 12:34:50.977892 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg"] Feb 16 12:34:51 crc kubenswrapper[4799]: I0216 12:34:51.449051 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm7s" event={"ID":"6734f76c-775d-47c3-8c54-e7c3e25a4575","Type":"ContainerStarted","Data":"9214ed7439fc51c805423078563e24039276b5ac13330c567f3871332ab3dee5"} Feb 16 12:34:51 crc kubenswrapper[4799]: I0216 12:34:51.793643 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:34:51 crc kubenswrapper[4799]: I0216 12:34:51.793753 4799 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:34:52 crc kubenswrapper[4799]: I0216 12:34:52.453973 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" podUID="3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" containerName="route-controller-manager" containerID="cri-o://44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250" gracePeriod=30 Feb 16 12:34:52 crc kubenswrapper[4799]: I0216 12:34:52.454080 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" podUID="6640870e-ae01-4993-932a-f688af106bc8" containerName="controller-manager" containerID="cri-o://64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e" gracePeriod=30 Feb 16 12:34:52 crc kubenswrapper[4799]: I0216 12:34:52.492750 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9xm7s" podStartSLOduration=3.9516603630000002 podStartE2EDuration="38.492716307s" podCreationTimestamp="2026-02-16 12:34:14 +0000 UTC" firstStartedPulling="2026-02-16 12:34:16.413788002 +0000 UTC m=+162.006803336" lastFinishedPulling="2026-02-16 12:34:50.954843946 +0000 UTC m=+196.547859280" observedRunningTime="2026-02-16 12:34:52.491293661 +0000 UTC m=+198.084309035" watchObservedRunningTime="2026-02-16 12:34:52.492716307 +0000 UTC m=+198.085731651" Feb 16 12:34:52 crc kubenswrapper[4799]: I0216 12:34:52.613163 4799 patch_prober.go:28] interesting pod/route-controller-manager-5ddcfd97cc-fv9rg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 16 12:34:52 crc kubenswrapper[4799]: I0216 12:34:52.613571 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" podUID="3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.492733 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.515400 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.532932 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk"] Feb 16 12:34:53 crc kubenswrapper[4799]: E0216 12:34:53.533479 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6640870e-ae01-4993-932a-f688af106bc8" containerName="controller-manager" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.533500 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6640870e-ae01-4993-932a-f688af106bc8" containerName="controller-manager" Feb 16 12:34:53 crc kubenswrapper[4799]: E0216 12:34:53.533529 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" containerName="route-controller-manager" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.533539 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" 
containerName="route-controller-manager" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.541462 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6640870e-ae01-4993-932a-f688af106bc8" containerName="controller-manager" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.541531 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" containerName="route-controller-manager" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.542210 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.561704 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk"] Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.562976 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtx46" event={"ID":"9501d397-7cf8-4712-b7bf-be0fc0c5eca4","Type":"ContainerStarted","Data":"a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79"} Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.567345 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gh6f" event={"ID":"694320f7-5f83-4b9c-9995-6ec38f6ee4cb","Type":"ContainerStarted","Data":"a24cec6047466b0e0508f77e67df55a7d0a169c038d3fe9660ea8cba6ba64597"} Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.569521 4799 generic.go:334] "Generic (PLEG): container finished" podID="6640870e-ae01-4993-932a-f688af106bc8" containerID="64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e" exitCode=0 Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.569581 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" 
event={"ID":"6640870e-ae01-4993-932a-f688af106bc8","Type":"ContainerDied","Data":"64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e"} Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.569604 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" event={"ID":"6640870e-ae01-4993-932a-f688af106bc8","Type":"ContainerDied","Data":"0ba581f215a076752503bfbc024ee0ccc10654a1872c226290797cd3c85d8e37"} Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.569628 4799 scope.go:117] "RemoveContainer" containerID="64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.569757 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55c5f675c5-s4zw9" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.581339 4799 generic.go:334] "Generic (PLEG): container finished" podID="3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" containerID="44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250" exitCode=0 Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.581905 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" event={"ID":"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c","Type":"ContainerDied","Data":"44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250"} Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.581962 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" event={"ID":"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c","Type":"ContainerDied","Data":"b198a4de47de23389a4d8ae9e630e69e2d4158345e43d3922b11103f88730af8"} Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.582084 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.595196 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fs5dc" event={"ID":"3c8b6238-00b9-48d2-b1f5-4375b0555da6","Type":"ContainerStarted","Data":"62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2"} Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.596619 4799 scope.go:117] "RemoveContainer" containerID="64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e" Feb 16 12:34:53 crc kubenswrapper[4799]: E0216 12:34:53.596989 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e\": container with ID starting with 64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e not found: ID does not exist" containerID="64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.597024 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e"} err="failed to get container status \"64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e\": rpc error: code = NotFound desc = could not find container \"64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e\": container with ID starting with 64923f977a3ee978c66ac2627f07de0ffc9edab9dc71667697db1f46730a158e not found: ID does not exist" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.597076 4799 scope.go:117] "RemoveContainer" containerID="44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.614834 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-vtx46" podStartSLOduration=3.046460752 podStartE2EDuration="39.614811383s" podCreationTimestamp="2026-02-16 12:34:14 +0000 UTC" firstStartedPulling="2026-02-16 12:34:16.540503039 +0000 UTC m=+162.133518373" lastFinishedPulling="2026-02-16 12:34:53.10885367 +0000 UTC m=+198.701869004" observedRunningTime="2026-02-16 12:34:53.612597502 +0000 UTC m=+199.205612836" watchObservedRunningTime="2026-02-16 12:34:53.614811383 +0000 UTC m=+199.207826717" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.616624 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p7r7" event={"ID":"8c24a8cb-4a90-46d9-a128-64c6b00fa185","Type":"ContainerStarted","Data":"77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e"} Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.632077 4799 scope.go:117] "RemoveContainer" containerID="44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.632856 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-proxy-ca-bundles\") pod \"6640870e-ae01-4993-932a-f688af106bc8\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.632910 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvzzc\" (UniqueName: \"kubernetes.io/projected/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-kube-api-access-xvzzc\") pod \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.632937 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6640870e-ae01-4993-932a-f688af106bc8-serving-cert\") pod 
\"6640870e-ae01-4993-932a-f688af106bc8\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.632963 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-serving-cert\") pod \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.633004 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-config\") pod \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.633059 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4r88\" (UniqueName: \"kubernetes.io/projected/6640870e-ae01-4993-932a-f688af106bc8-kube-api-access-g4r88\") pod \"6640870e-ae01-4993-932a-f688af106bc8\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.633095 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-client-ca\") pod \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\" (UID: \"3d84d339-7ef9-4117-b7cf-bf4e7849ec8c\") " Feb 16 12:34:53 crc kubenswrapper[4799]: E0216 12:34:53.633270 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250\": container with ID starting with 44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250 not found: ID does not exist" containerID="44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 
12:34:53.633322 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250"} err="failed to get container status \"44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250\": rpc error: code = NotFound desc = could not find container \"44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250\": container with ID starting with 44d022a3efc83479da218d9a20453844c6d1124d3bd6137d5a62b8a2cb95d250 not found: ID does not exist" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.634599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6640870e-ae01-4993-932a-f688af106bc8" (UID: "6640870e-ae01-4993-932a-f688af106bc8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.634672 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-client-ca\") pod \"6640870e-ae01-4993-932a-f688af106bc8\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.634725 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-config\") pod \"6640870e-ae01-4993-932a-f688af106bc8\" (UID: \"6640870e-ae01-4993-932a-f688af106bc8\") " Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.634926 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 
12:34:53.634987 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-config" (OuterVolumeSpecName: "config") pod "3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" (UID: "3d84d339-7ef9-4117-b7cf-bf4e7849ec8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.635502 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "6640870e-ae01-4993-932a-f688af106bc8" (UID: "6640870e-ae01-4993-932a-f688af106bc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.635830 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" (UID: "3d84d339-7ef9-4117-b7cf-bf4e7849ec8c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.637927 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-config" (OuterVolumeSpecName: "config") pod "6640870e-ae01-4993-932a-f688af106bc8" (UID: "6640870e-ae01-4993-932a-f688af106bc8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.641245 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-kube-api-access-xvzzc" (OuterVolumeSpecName: "kube-api-access-xvzzc") pod "3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" (UID: "3d84d339-7ef9-4117-b7cf-bf4e7849ec8c"). InnerVolumeSpecName "kube-api-access-xvzzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.642068 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6640870e-ae01-4993-932a-f688af106bc8-kube-api-access-g4r88" (OuterVolumeSpecName: "kube-api-access-g4r88") pod "6640870e-ae01-4993-932a-f688af106bc8" (UID: "6640870e-ae01-4993-932a-f688af106bc8"). InnerVolumeSpecName "kube-api-access-g4r88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.644292 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6640870e-ae01-4993-932a-f688af106bc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6640870e-ae01-4993-932a-f688af106bc8" (UID: "6640870e-ae01-4993-932a-f688af106bc8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.645189 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" (UID: "3d84d339-7ef9-4117-b7cf-bf4e7849ec8c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.703756 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7gh6f" podStartSLOduration=4.449257525 podStartE2EDuration="37.703725905s" podCreationTimestamp="2026-02-16 12:34:16 +0000 UTC" firstStartedPulling="2026-02-16 12:34:19.835957004 +0000 UTC m=+165.428972328" lastFinishedPulling="2026-02-16 12:34:53.090425334 +0000 UTC m=+198.683440708" observedRunningTime="2026-02-16 12:34:53.667850546 +0000 UTC m=+199.260865880" watchObservedRunningTime="2026-02-16 12:34:53.703725905 +0000 UTC m=+199.296741239" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.724523 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8p7r7" podStartSLOduration=4.223338444 podStartE2EDuration="36.724493166s" podCreationTimestamp="2026-02-16 12:34:17 +0000 UTC" firstStartedPulling="2026-02-16 12:34:19.778233694 +0000 UTC m=+165.371249028" lastFinishedPulling="2026-02-16 12:34:52.279388426 +0000 UTC m=+197.872403750" observedRunningTime="2026-02-16 12:34:53.709004176 +0000 UTC m=+199.302019500" watchObservedRunningTime="2026-02-16 12:34:53.724493166 +0000 UTC m=+199.317508500" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.735780 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-client-ca\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.735877 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-config\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.735902 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2ac-856c-4169-81d6-cd350a5890de-serving-cert\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.735926 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7gp\" (UniqueName: \"kubernetes.io/projected/17c9d2ac-856c-4169-81d6-cd350a5890de-kube-api-access-cc7gp\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.736013 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.736025 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvzzc\" (UniqueName: \"kubernetes.io/projected/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-kube-api-access-xvzzc\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.736035 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6640870e-ae01-4993-932a-f688af106bc8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc 
kubenswrapper[4799]: I0216 12:34:53.736044 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.736055 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.736065 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4r88\" (UniqueName: \"kubernetes.io/projected/6640870e-ae01-4993-932a-f688af106bc8-kube-api-access-g4r88\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.736074 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.736083 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6640870e-ae01-4993-932a-f688af106bc8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.736628 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fs5dc" podStartSLOduration=3.079422932 podStartE2EDuration="39.736608927s" podCreationTimestamp="2026-02-16 12:34:14 +0000 UTC" firstStartedPulling="2026-02-16 12:34:16.529415356 +0000 UTC m=+162.122430690" lastFinishedPulling="2026-02-16 12:34:53.186601351 +0000 UTC m=+198.779616685" observedRunningTime="2026-02-16 12:34:53.735477451 +0000 UTC m=+199.328492785" watchObservedRunningTime="2026-02-16 12:34:53.736608927 +0000 UTC m=+199.329624261" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.837363 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-client-ca\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.837467 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-config\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.837503 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2ac-856c-4169-81d6-cd350a5890de-serving-cert\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.837541 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc7gp\" (UniqueName: \"kubernetes.io/projected/17c9d2ac-856c-4169-81d6-cd350a5890de-kube-api-access-cc7gp\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.838976 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-client-ca\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " 
pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.840442 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-config\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.843383 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2ac-856c-4169-81d6-cd350a5890de-serving-cert\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.858857 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc7gp\" (UniqueName: \"kubernetes.io/projected/17c9d2ac-856c-4169-81d6-cd350a5890de-kube-api-access-cc7gp\") pod \"route-controller-manager-859dbb45-gs2tk\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.891217 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.911573 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55c5f675c5-s4zw9"] Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.922352 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55c5f675c5-s4zw9"] Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.930447 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg"] Feb 16 12:34:53 crc kubenswrapper[4799]: I0216 12:34:53.933921 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ddcfd97cc-fv9rg"] Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.131430 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk"] Feb 16 12:34:54 crc kubenswrapper[4799]: W0216 12:34:54.139318 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c9d2ac_856c_4169_81d6_cd350a5890de.slice/crio-c6f3a7b0610e1c860820b5892116308f362c84f8dda1a79addc7c7278f1e4496 WatchSource:0}: Error finding container c6f3a7b0610e1c860820b5892116308f362c84f8dda1a79addc7c7278f1e4496: Status 404 returned error can't find the container with id c6f3a7b0610e1c860820b5892116308f362c84f8dda1a79addc7c7278f1e4496 Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.626106 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" event={"ID":"17c9d2ac-856c-4169-81d6-cd350a5890de","Type":"ContainerStarted","Data":"92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223"} Feb 16 12:34:54 crc 
kubenswrapper[4799]: I0216 12:34:54.626958 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.626982 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" event={"ID":"17c9d2ac-856c-4169-81d6-cd350a5890de","Type":"ContainerStarted","Data":"c6f3a7b0610e1c860820b5892116308f362c84f8dda1a79addc7c7278f1e4496"} Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.629214 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgm8v" event={"ID":"a302cd9c-7040-4248-8fc0-55d280e45b9e","Type":"ContainerStarted","Data":"bdebac4d576fb26fe50d11a16cb7525aa312ee419bd43e0840f4df0981ebd221"} Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.659015 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.663076 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" podStartSLOduration=4.6630551019999995 podStartE2EDuration="4.663055102s" podCreationTimestamp="2026-02-16 12:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:54.660612614 +0000 UTC m=+200.253627968" watchObservedRunningTime="2026-02-16 12:34:54.663055102 +0000 UTC m=+200.256070436" Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.687568 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jgm8v" podStartSLOduration=4.2775774460000004 podStartE2EDuration="37.687547504s" podCreationTimestamp="2026-02-16 12:34:17 
+0000 UTC" firstStartedPulling="2026-02-16 12:34:19.784220098 +0000 UTC m=+165.377235432" lastFinishedPulling="2026-02-16 12:34:53.194190166 +0000 UTC m=+198.787205490" observedRunningTime="2026-02-16 12:34:54.682527512 +0000 UTC m=+200.275542846" watchObservedRunningTime="2026-02-16 12:34:54.687547504 +0000 UTC m=+200.280562838" Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.800859 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fs5dc" Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.800928 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fs5dc" Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.929784 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9xm7s" Feb 16 12:34:54 crc kubenswrapper[4799]: I0216 12:34:54.929880 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9xm7s" Feb 16 12:34:55 crc kubenswrapper[4799]: I0216 12:34:55.114220 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtx46" Feb 16 12:34:55 crc kubenswrapper[4799]: I0216 12:34:55.114294 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtx46" Feb 16 12:34:55 crc kubenswrapper[4799]: I0216 12:34:55.160273 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d84d339-7ef9-4117-b7cf-bf4e7849ec8c" path="/var/lib/kubelet/pods/3d84d339-7ef9-4117-b7cf-bf4e7849ec8c/volumes" Feb 16 12:34:55 crc kubenswrapper[4799]: I0216 12:34:55.160932 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6640870e-ae01-4993-932a-f688af106bc8" path="/var/lib/kubelet/pods/6640870e-ae01-4993-932a-f688af106bc8/volumes" Feb 16 12:34:55 crc 
kubenswrapper[4799]: I0216 12:34:55.919488 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fs5dc" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="registry-server" probeResult="failure" output=< Feb 16 12:34:55 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:34:55 crc kubenswrapper[4799]: > Feb 16 12:34:55 crc kubenswrapper[4799]: I0216 12:34:55.973718 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9xm7s" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerName="registry-server" probeResult="failure" output=< Feb 16 12:34:55 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:34:55 crc kubenswrapper[4799]: > Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.168879 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vtx46" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="registry-server" probeResult="failure" output=< Feb 16 12:34:56 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:34:56 crc kubenswrapper[4799]: > Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.247884 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f75f647f5-ddhrl"] Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.248913 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.251528 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.251563 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.251569 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.252046 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.252326 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.266355 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.266386 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.268184 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f75f647f5-ddhrl"] Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.284880 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-config\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " 
pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.284963 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2955\" (UniqueName: \"kubernetes.io/projected/f89b2989-cb05-4401-a5a2-11e047187c23-kube-api-access-l2955\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.285003 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-client-ca\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.285042 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f89b2989-cb05-4401-a5a2-11e047187c23-serving-cert\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.285101 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-proxy-ca-bundles\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.385505 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-proxy-ca-bundles\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.385576 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-config\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.385620 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2955\" (UniqueName: \"kubernetes.io/projected/f89b2989-cb05-4401-a5a2-11e047187c23-kube-api-access-l2955\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.385654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-client-ca\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.385685 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f89b2989-cb05-4401-a5a2-11e047187c23-serving-cert\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.387208 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-client-ca\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.387481 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-config\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.388953 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-proxy-ca-bundles\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.400251 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f89b2989-cb05-4401-a5a2-11e047187c23-serving-cert\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.402543 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2955\" (UniqueName: \"kubernetes.io/projected/f89b2989-cb05-4401-a5a2-11e047187c23-kube-api-access-l2955\") pod \"controller-manager-f75f647f5-ddhrl\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc 
kubenswrapper[4799]: I0216 12:34:56.621489 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:56 crc kubenswrapper[4799]: I0216 12:34:56.866750 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f75f647f5-ddhrl"] Feb 16 12:34:56 crc kubenswrapper[4799]: W0216 12:34:56.867733 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf89b2989_cb05_4401_a5a2_11e047187c23.slice/crio-90c05ac594fc873aa57ae600fa92186f4fb7c6146770b49839fcf5c87f35974d WatchSource:0}: Error finding container 90c05ac594fc873aa57ae600fa92186f4fb7c6146770b49839fcf5c87f35974d: Status 404 returned error can't find the container with id 90c05ac594fc873aa57ae600fa92186f4fb7c6146770b49839fcf5c87f35974d Feb 16 12:34:57 crc kubenswrapper[4799]: I0216 12:34:57.147763 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:57 crc kubenswrapper[4799]: I0216 12:34:57.149651 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:57 crc kubenswrapper[4799]: I0216 12:34:57.292337 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:34:57 crc kubenswrapper[4799]: I0216 12:34:57.653244 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" event={"ID":"f89b2989-cb05-4401-a5a2-11e047187c23","Type":"ContainerStarted","Data":"20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506"} Feb 16 12:34:57 crc kubenswrapper[4799]: I0216 12:34:57.653304 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:57 crc kubenswrapper[4799]: I0216 12:34:57.653318 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" event={"ID":"f89b2989-cb05-4401-a5a2-11e047187c23","Type":"ContainerStarted","Data":"90c05ac594fc873aa57ae600fa92186f4fb7c6146770b49839fcf5c87f35974d"} Feb 16 12:34:57 crc kubenswrapper[4799]: I0216 12:34:57.658499 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:34:57 crc kubenswrapper[4799]: I0216 12:34:57.669486 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" podStartSLOduration=7.669465986 podStartE2EDuration="7.669465986s" podCreationTimestamp="2026-02-16 12:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:34:57.668026089 +0000 UTC m=+203.261041423" watchObservedRunningTime="2026-02-16 12:34:57.669465986 +0000 UTC m=+203.262481320" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.050087 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.050728 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.319939 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.321465 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:34:58 crc kubenswrapper[4799]: 
I0216 12:34:58.373979 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.375117 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.377725 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.378002 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.387890 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.419353 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac80530-0089-47c9-880e-6a43c2889f19-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aac80530-0089-47c9-880e-6a43c2889f19\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.419432 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac80530-0089-47c9-880e-6a43c2889f19-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aac80530-0089-47c9-880e-6a43c2889f19\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.521298 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac80530-0089-47c9-880e-6a43c2889f19-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"aac80530-0089-47c9-880e-6a43c2889f19\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.521393 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac80530-0089-47c9-880e-6a43c2889f19-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aac80530-0089-47c9-880e-6a43c2889f19\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.521462 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac80530-0089-47c9-880e-6a43c2889f19-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aac80530-0089-47c9-880e-6a43c2889f19\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.547975 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac80530-0089-47c9-880e-6a43c2889f19-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aac80530-0089-47c9-880e-6a43c2889f19\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:34:58 crc kubenswrapper[4799]: I0216 12:34:58.697144 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:34:59 crc kubenswrapper[4799]: I0216 12:34:59.100550 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jgm8v" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="registry-server" probeResult="failure" output=< Feb 16 12:34:59 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:34:59 crc kubenswrapper[4799]: > Feb 16 12:34:59 crc kubenswrapper[4799]: I0216 12:34:59.169794 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:34:59 crc kubenswrapper[4799]: W0216 12:34:59.175191 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaac80530_0089_47c9_880e_6a43c2889f19.slice/crio-7e3c886c322897ca696a99dcfb27c5a1edb69cd074cba0283b2dfce245f9f6aa WatchSource:0}: Error finding container 7e3c886c322897ca696a99dcfb27c5a1edb69cd074cba0283b2dfce245f9f6aa: Status 404 returned error can't find the container with id 7e3c886c322897ca696a99dcfb27c5a1edb69cd074cba0283b2dfce245f9f6aa Feb 16 12:34:59 crc kubenswrapper[4799]: I0216 12:34:59.362028 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8p7r7" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="registry-server" probeResult="failure" output=< Feb 16 12:34:59 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:34:59 crc kubenswrapper[4799]: > Feb 16 12:34:59 crc kubenswrapper[4799]: I0216 12:34:59.664736 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aac80530-0089-47c9-880e-6a43c2889f19","Type":"ContainerStarted","Data":"7e3c886c322897ca696a99dcfb27c5a1edb69cd074cba0283b2dfce245f9f6aa"} Feb 16 12:35:00 crc kubenswrapper[4799]: I0216 12:35:00.672517 4799 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aac80530-0089-47c9-880e-6a43c2889f19","Type":"ContainerStarted","Data":"c54520da0a16ec0fd2efe0232d25dcc96442b59010d79e6aaf1b694becc249ad"} Feb 16 12:35:00 crc kubenswrapper[4799]: I0216 12:35:00.696780 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.696756052 podStartE2EDuration="2.696756052s" podCreationTimestamp="2026-02-16 12:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:35:00.692515515 +0000 UTC m=+206.285530849" watchObservedRunningTime="2026-02-16 12:35:00.696756052 +0000 UTC m=+206.289771386" Feb 16 12:35:01 crc kubenswrapper[4799]: I0216 12:35:01.680745 4799 generic.go:334] "Generic (PLEG): container finished" podID="aac80530-0089-47c9-880e-6a43c2889f19" containerID="c54520da0a16ec0fd2efe0232d25dcc96442b59010d79e6aaf1b694becc249ad" exitCode=0 Feb 16 12:35:01 crc kubenswrapper[4799]: I0216 12:35:01.680866 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aac80530-0089-47c9-880e-6a43c2889f19","Type":"ContainerDied","Data":"c54520da0a16ec0fd2efe0232d25dcc96442b59010d79e6aaf1b694becc249ad"} Feb 16 12:35:01 crc kubenswrapper[4799]: I0216 12:35:01.685254 4799 generic.go:334] "Generic (PLEG): container finished" podID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerID="28bf16b64351da15b7c9f7f73f8ef4a5b57c00ee224a9b8efe1e6cf110f560f6" exitCode=0 Feb 16 12:35:01 crc kubenswrapper[4799]: I0216 12:35:01.685293 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wfjv" event={"ID":"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d","Type":"ContainerDied","Data":"28bf16b64351da15b7c9f7f73f8ef4a5b57c00ee224a9b8efe1e6cf110f560f6"} Feb 16 12:35:02 crc 
kubenswrapper[4799]: I0216 12:35:02.695391 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wfjv" event={"ID":"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d","Type":"ContainerStarted","Data":"6467821647ebdb6b790e04c6d718aaa433ef1e6354d0e453bf6204b8083f34bd"} Feb 16 12:35:02 crc kubenswrapper[4799]: I0216 12:35:02.722859 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wfjv" podStartSLOduration=3.285223165 podStartE2EDuration="46.722839548s" podCreationTimestamp="2026-02-16 12:34:16 +0000 UTC" firstStartedPulling="2026-02-16 12:34:18.741027466 +0000 UTC m=+164.334042800" lastFinishedPulling="2026-02-16 12:35:02.178643849 +0000 UTC m=+207.771659183" observedRunningTime="2026-02-16 12:35:02.718930252 +0000 UTC m=+208.311945586" watchObservedRunningTime="2026-02-16 12:35:02.722839548 +0000 UTC m=+208.315854882" Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.072212 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.101652 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac80530-0089-47c9-880e-6a43c2889f19-kube-api-access\") pod \"aac80530-0089-47c9-880e-6a43c2889f19\" (UID: \"aac80530-0089-47c9-880e-6a43c2889f19\") " Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.101705 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac80530-0089-47c9-880e-6a43c2889f19-kubelet-dir\") pod \"aac80530-0089-47c9-880e-6a43c2889f19\" (UID: \"aac80530-0089-47c9-880e-6a43c2889f19\") " Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.101810 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac80530-0089-47c9-880e-6a43c2889f19-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aac80530-0089-47c9-880e-6a43c2889f19" (UID: "aac80530-0089-47c9-880e-6a43c2889f19"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.102000 4799 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aac80530-0089-47c9-880e-6a43c2889f19-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.108415 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac80530-0089-47c9-880e-6a43c2889f19-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aac80530-0089-47c9-880e-6a43c2889f19" (UID: "aac80530-0089-47c9-880e-6a43c2889f19"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.203334 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aac80530-0089-47c9-880e-6a43c2889f19-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.707384 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerID="04316b593a0200df9981b5e202dee5fe7157b9648155bdbf9219be861bcef04c" exitCode=0 Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.707533 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjtgr" event={"ID":"0b2108bc-d6b4-4de2-9163-f3d6714155b3","Type":"ContainerDied","Data":"04316b593a0200df9981b5e202dee5fe7157b9648155bdbf9219be861bcef04c"} Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.711053 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aac80530-0089-47c9-880e-6a43c2889f19","Type":"ContainerDied","Data":"7e3c886c322897ca696a99dcfb27c5a1edb69cd074cba0283b2dfce245f9f6aa"} Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.711095 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e3c886c322897ca696a99dcfb27c5a1edb69cd074cba0283b2dfce245f9f6aa" Feb 16 12:35:03 crc kubenswrapper[4799]: I0216 12:35:03.711201 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:35:04 crc kubenswrapper[4799]: I0216 12:35:04.720360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjtgr" event={"ID":"0b2108bc-d6b4-4de2-9163-f3d6714155b3","Type":"ContainerStarted","Data":"782917260b598e8fa581200df49de3a9262756d31602af8009c6d054ce2952a4"} Feb 16 12:35:04 crc kubenswrapper[4799]: I0216 12:35:04.863453 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fs5dc" Feb 16 12:35:04 crc kubenswrapper[4799]: I0216 12:35:04.889063 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bjtgr" podStartSLOduration=3.201528498 podStartE2EDuration="50.889039061s" podCreationTimestamp="2026-02-16 12:34:14 +0000 UTC" firstStartedPulling="2026-02-16 12:34:16.466512645 +0000 UTC m=+162.059527969" lastFinishedPulling="2026-02-16 12:35:04.154023198 +0000 UTC m=+209.747038532" observedRunningTime="2026-02-16 12:35:04.73917279 +0000 UTC m=+210.332188124" watchObservedRunningTime="2026-02-16 12:35:04.889039061 +0000 UTC m=+210.482054395" Feb 16 12:35:04 crc kubenswrapper[4799]: I0216 12:35:04.908180 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fs5dc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.004390 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9xm7s" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.065196 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9xm7s" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.166605 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtx46" Feb 16 
12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.209039 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtx46" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.303251 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bjtgr" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.303322 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bjtgr" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.769371 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 12:35:05 crc kubenswrapper[4799]: E0216 12:35:05.769698 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac80530-0089-47c9-880e-6a43c2889f19" containerName="pruner" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.769714 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac80530-0089-47c9-880e-6a43c2889f19" containerName="pruner" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.769851 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac80530-0089-47c9-880e-6a43c2889f19" containerName="pruner" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.772931 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.782867 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.789031 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.808791 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.841913 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.842100 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-var-lock\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.842162 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/159c1a7c-133c-47d5-990d-c0869b0eafa4-kube-api-access\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.942581 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/159c1a7c-133c-47d5-990d-c0869b0eafa4-kube-api-access\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.942631 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.942697 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-var-lock\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.942767 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-var-lock\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.943075 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:05 crc kubenswrapper[4799]: I0216 12:35:05.963972 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/159c1a7c-133c-47d5-990d-c0869b0eafa4-kube-api-access\") pod \"installer-9-crc\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:06 crc kubenswrapper[4799]: I0216 12:35:06.098289 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:06 crc kubenswrapper[4799]: I0216 12:35:06.341739 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bjtgr" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="registry-server" probeResult="failure" output=< Feb 16 12:35:06 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:35:06 crc kubenswrapper[4799]: > Feb 16 12:35:06 crc kubenswrapper[4799]: I0216 12:35:06.569083 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 12:35:06 crc kubenswrapper[4799]: W0216 12:35:06.586708 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod159c1a7c_133c_47d5_990d_c0869b0eafa4.slice/crio-4ceb39162227c5cf6447913a8b79c2e5bc8eed7aba1bb5863839ff9a8f39b29d WatchSource:0}: Error finding container 4ceb39162227c5cf6447913a8b79c2e5bc8eed7aba1bb5863839ff9a8f39b29d: Status 404 returned error can't find the container with id 4ceb39162227c5cf6447913a8b79c2e5bc8eed7aba1bb5863839ff9a8f39b29d Feb 16 12:35:06 crc kubenswrapper[4799]: I0216 12:35:06.740814 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"159c1a7c-133c-47d5-990d-c0869b0eafa4","Type":"ContainerStarted","Data":"4ceb39162227c5cf6447913a8b79c2e5bc8eed7aba1bb5863839ff9a8f39b29d"} Feb 16 12:35:06 crc kubenswrapper[4799]: I0216 12:35:06.767761 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:35:06 crc kubenswrapper[4799]: I0216 12:35:06.767859 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:35:06 crc kubenswrapper[4799]: I0216 12:35:06.834290 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.187951 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtx46"] Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.188310 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtx46" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="registry-server" containerID="cri-o://a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79" gracePeriod=2 Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.215490 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.697917 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtx46" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.751736 4799 generic.go:334] "Generic (PLEG): container finished" podID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerID="a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79" exitCode=0 Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.751824 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtx46" event={"ID":"9501d397-7cf8-4712-b7bf-be0fc0c5eca4","Type":"ContainerDied","Data":"a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79"} Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.751856 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtx46" event={"ID":"9501d397-7cf8-4712-b7bf-be0fc0c5eca4","Type":"ContainerDied","Data":"4a8e9452e36f4d4e8700f1770befb3aab745d283ad43c75fb78c53fdccce46ea"} Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.751878 4799 scope.go:117] "RemoveContainer" containerID="a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.752026 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtx46" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.757254 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"159c1a7c-133c-47d5-990d-c0869b0eafa4","Type":"ContainerStarted","Data":"17894eb17e3b317c8d08f87cc926b57a6a500c394250588828a0e0ccd6d2f790"} Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.770352 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-catalog-content\") pod \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.770487 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-utilities\") pod \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.770641 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfvlh\" (UniqueName: \"kubernetes.io/projected/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-kube-api-access-sfvlh\") pod \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\" (UID: \"9501d397-7cf8-4712-b7bf-be0fc0c5eca4\") " Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.771414 4799 scope.go:117] "RemoveContainer" containerID="75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.773725 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-utilities" (OuterVolumeSpecName: "utilities") pod "9501d397-7cf8-4712-b7bf-be0fc0c5eca4" (UID: "9501d397-7cf8-4712-b7bf-be0fc0c5eca4"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.783242 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.7832237380000002 podStartE2EDuration="2.783223738s" podCreationTimestamp="2026-02-16 12:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:35:07.780960314 +0000 UTC m=+213.373975648" watchObservedRunningTime="2026-02-16 12:35:07.783223738 +0000 UTC m=+213.376239072" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.786500 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-kube-api-access-sfvlh" (OuterVolumeSpecName: "kube-api-access-sfvlh") pod "9501d397-7cf8-4712-b7bf-be0fc0c5eca4" (UID: "9501d397-7cf8-4712-b7bf-be0fc0c5eca4"). InnerVolumeSpecName "kube-api-access-sfvlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.813330 4799 scope.go:117] "RemoveContainer" containerID="df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.830855 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.836004 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9501d397-7cf8-4712-b7bf-be0fc0c5eca4" (UID: "9501d397-7cf8-4712-b7bf-be0fc0c5eca4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.841853 4799 scope.go:117] "RemoveContainer" containerID="a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79" Feb 16 12:35:07 crc kubenswrapper[4799]: E0216 12:35:07.842615 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79\": container with ID starting with a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79 not found: ID does not exist" containerID="a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.842653 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79"} err="failed to get container status \"a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79\": rpc error: code = NotFound desc = could not find container \"a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79\": container with ID starting with a3a68fb57ae9faa2cec60a3ee7565ba166502780214890a3006ab3062821ec79 not found: ID does not exist" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.842683 4799 scope.go:117] "RemoveContainer" containerID="75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415" Feb 16 12:35:07 crc kubenswrapper[4799]: E0216 12:35:07.842974 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415\": container with ID starting with 75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415 not found: ID does not exist" containerID="75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.842996 
4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415"} err="failed to get container status \"75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415\": rpc error: code = NotFound desc = could not find container \"75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415\": container with ID starting with 75c662548e4a0e168723ee6be95e92bfd43cdb626fa3449bf70429de63706415 not found: ID does not exist" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.843010 4799 scope.go:117] "RemoveContainer" containerID="df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c" Feb 16 12:35:07 crc kubenswrapper[4799]: E0216 12:35:07.843350 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c\": container with ID starting with df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c not found: ID does not exist" containerID="df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.843373 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c"} err="failed to get container status \"df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c\": rpc error: code = NotFound desc = could not find container \"df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c\": container with ID starting with df6c5cb25bfde5c9a647a6ca1e9a194f28528664eb8e75f1463fe1a003dd326c not found: ID does not exist" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.872417 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-utilities\") on node 
\"crc\" DevicePath \"\"" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.872456 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfvlh\" (UniqueName: \"kubernetes.io/projected/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-kube-api-access-sfvlh\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:07 crc kubenswrapper[4799]: I0216 12:35:07.872465 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9501d397-7cf8-4712-b7bf-be0fc0c5eca4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:08 crc kubenswrapper[4799]: I0216 12:35:08.084524 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtx46"] Feb 16 12:35:08 crc kubenswrapper[4799]: I0216 12:35:08.092679 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtx46"] Feb 16 12:35:08 crc kubenswrapper[4799]: I0216 12:35:08.096691 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:35:08 crc kubenswrapper[4799]: I0216 12:35:08.152565 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:35:08 crc kubenswrapper[4799]: I0216 12:35:08.372451 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:35:08 crc kubenswrapper[4799]: I0216 12:35:08.434927 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:35:09 crc kubenswrapper[4799]: I0216 12:35:09.166567 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" path="/var/lib/kubelet/pods/9501d397-7cf8-4712-b7bf-be0fc0c5eca4/volumes" Feb 16 12:35:09 crc kubenswrapper[4799]: I0216 12:35:09.385550 4799 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gh6f"] Feb 16 12:35:09 crc kubenswrapper[4799]: I0216 12:35:09.385818 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7gh6f" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerName="registry-server" containerID="cri-o://a24cec6047466b0e0508f77e67df55a7d0a169c038d3fe9660ea8cba6ba64597" gracePeriod=2 Feb 16 12:35:09 crc kubenswrapper[4799]: I0216 12:35:09.784720 4799 generic.go:334] "Generic (PLEG): container finished" podID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerID="a24cec6047466b0e0508f77e67df55a7d0a169c038d3fe9660ea8cba6ba64597" exitCode=0 Feb 16 12:35:09 crc kubenswrapper[4799]: I0216 12:35:09.784805 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gh6f" event={"ID":"694320f7-5f83-4b9c-9995-6ec38f6ee4cb","Type":"ContainerDied","Data":"a24cec6047466b0e0508f77e67df55a7d0a169c038d3fe9660ea8cba6ba64597"} Feb 16 12:35:09 crc kubenswrapper[4799]: I0216 12:35:09.947441 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.030409 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhph4\" (UniqueName: \"kubernetes.io/projected/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-kube-api-access-mhph4\") pod \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.030468 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-utilities\") pod \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.030695 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-catalog-content\") pod \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\" (UID: \"694320f7-5f83-4b9c-9995-6ec38f6ee4cb\") " Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.031488 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-utilities" (OuterVolumeSpecName: "utilities") pod "694320f7-5f83-4b9c-9995-6ec38f6ee4cb" (UID: "694320f7-5f83-4b9c-9995-6ec38f6ee4cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.038834 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-kube-api-access-mhph4" (OuterVolumeSpecName: "kube-api-access-mhph4") pod "694320f7-5f83-4b9c-9995-6ec38f6ee4cb" (UID: "694320f7-5f83-4b9c-9995-6ec38f6ee4cb"). InnerVolumeSpecName "kube-api-access-mhph4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.070525 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "694320f7-5f83-4b9c-9995-6ec38f6ee4cb" (UID: "694320f7-5f83-4b9c-9995-6ec38f6ee4cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.137157 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.137217 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhph4\" (UniqueName: \"kubernetes.io/projected/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-kube-api-access-mhph4\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.137235 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/694320f7-5f83-4b9c-9995-6ec38f6ee4cb-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.795103 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gh6f" event={"ID":"694320f7-5f83-4b9c-9995-6ec38f6ee4cb","Type":"ContainerDied","Data":"c081e91ae5539389bc5835ebbb16d3755fb335e320b26524c0a138ea0b26ab69"} Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.795189 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gh6f" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.795261 4799 scope.go:117] "RemoveContainer" containerID="a24cec6047466b0e0508f77e67df55a7d0a169c038d3fe9660ea8cba6ba64597" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.818744 4799 scope.go:117] "RemoveContainer" containerID="5e18099c8c3de77a6782204e86cca749d55f4559b19474f5729ad59391803fdd" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.847560 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gh6f"] Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.854695 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gh6f"] Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.873570 4799 scope.go:117] "RemoveContainer" containerID="e6089210bce3de291b99e4495d9b6ba2ca1a4f04578c729a3c8d2ddc55ac8ea7" Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.932037 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f75f647f5-ddhrl"] Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.932402 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" podUID="f89b2989-cb05-4401-a5a2-11e047187c23" containerName="controller-manager" containerID="cri-o://20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506" gracePeriod=30 Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.948249 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk"] Feb 16 12:35:10 crc kubenswrapper[4799]: I0216 12:35:10.948657 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" 
podUID="17c9d2ac-856c-4169-81d6-cd350a5890de" containerName="route-controller-manager" containerID="cri-o://92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223" gracePeriod=30 Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.184957 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" path="/var/lib/kubelet/pods/694320f7-5f83-4b9c-9995-6ec38f6ee4cb/volumes" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.505683 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.555086 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.573149 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-client-ca\") pod \"17c9d2ac-856c-4169-81d6-cd350a5890de\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.573239 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-config\") pod \"17c9d2ac-856c-4169-81d6-cd350a5890de\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.573269 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2ac-856c-4169-81d6-cd350a5890de-serving-cert\") pod \"17c9d2ac-856c-4169-81d6-cd350a5890de\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.573337 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc7gp\" (UniqueName: \"kubernetes.io/projected/17c9d2ac-856c-4169-81d6-cd350a5890de-kube-api-access-cc7gp\") pod \"17c9d2ac-856c-4169-81d6-cd350a5890de\" (UID: \"17c9d2ac-856c-4169-81d6-cd350a5890de\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.574616 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-client-ca" (OuterVolumeSpecName: "client-ca") pod "17c9d2ac-856c-4169-81d6-cd350a5890de" (UID: "17c9d2ac-856c-4169-81d6-cd350a5890de"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.575579 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-config" (OuterVolumeSpecName: "config") pod "17c9d2ac-856c-4169-81d6-cd350a5890de" (UID: "17c9d2ac-856c-4169-81d6-cd350a5890de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.579357 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c9d2ac-856c-4169-81d6-cd350a5890de-kube-api-access-cc7gp" (OuterVolumeSpecName: "kube-api-access-cc7gp") pod "17c9d2ac-856c-4169-81d6-cd350a5890de" (UID: "17c9d2ac-856c-4169-81d6-cd350a5890de"). InnerVolumeSpecName "kube-api-access-cc7gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.579412 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c9d2ac-856c-4169-81d6-cd350a5890de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17c9d2ac-856c-4169-81d6-cd350a5890de" (UID: "17c9d2ac-856c-4169-81d6-cd350a5890de"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.674465 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-client-ca\") pod \"f89b2989-cb05-4401-a5a2-11e047187c23\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.674595 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2955\" (UniqueName: \"kubernetes.io/projected/f89b2989-cb05-4401-a5a2-11e047187c23-kube-api-access-l2955\") pod \"f89b2989-cb05-4401-a5a2-11e047187c23\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.674678 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-config\") pod \"f89b2989-cb05-4401-a5a2-11e047187c23\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.674713 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-proxy-ca-bundles\") pod \"f89b2989-cb05-4401-a5a2-11e047187c23\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.674763 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f89b2989-cb05-4401-a5a2-11e047187c23-serving-cert\") pod \"f89b2989-cb05-4401-a5a2-11e047187c23\" (UID: \"f89b2989-cb05-4401-a5a2-11e047187c23\") " Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.675024 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.675049 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9d2ac-856c-4169-81d6-cd350a5890de-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.675058 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17c9d2ac-856c-4169-81d6-cd350a5890de-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.675069 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc7gp\" (UniqueName: \"kubernetes.io/projected/17c9d2ac-856c-4169-81d6-cd350a5890de-kube-api-access-cc7gp\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.675639 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f89b2989-cb05-4401-a5a2-11e047187c23" (UID: "f89b2989-cb05-4401-a5a2-11e047187c23"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.675765 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-client-ca" (OuterVolumeSpecName: "client-ca") pod "f89b2989-cb05-4401-a5a2-11e047187c23" (UID: "f89b2989-cb05-4401-a5a2-11e047187c23"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.675795 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-config" (OuterVolumeSpecName: "config") pod "f89b2989-cb05-4401-a5a2-11e047187c23" (UID: "f89b2989-cb05-4401-a5a2-11e047187c23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.678007 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89b2989-cb05-4401-a5a2-11e047187c23-kube-api-access-l2955" (OuterVolumeSpecName: "kube-api-access-l2955") pod "f89b2989-cb05-4401-a5a2-11e047187c23" (UID: "f89b2989-cb05-4401-a5a2-11e047187c23"). InnerVolumeSpecName "kube-api-access-l2955". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.678177 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89b2989-cb05-4401-a5a2-11e047187c23-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f89b2989-cb05-4401-a5a2-11e047187c23" (UID: "f89b2989-cb05-4401-a5a2-11e047187c23"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.776160 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2955\" (UniqueName: \"kubernetes.io/projected/f89b2989-cb05-4401-a5a2-11e047187c23-kube-api-access-l2955\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.776230 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.776247 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.776258 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f89b2989-cb05-4401-a5a2-11e047187c23-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.776268 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f89b2989-cb05-4401-a5a2-11e047187c23-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.783577 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8p7r7"] Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.783913 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8p7r7" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="registry-server" containerID="cri-o://77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e" gracePeriod=2 Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.804269 4799 generic.go:334] 
"Generic (PLEG): container finished" podID="17c9d2ac-856c-4169-81d6-cd350a5890de" containerID="92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223" exitCode=0 Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.804332 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" event={"ID":"17c9d2ac-856c-4169-81d6-cd350a5890de","Type":"ContainerDied","Data":"92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223"} Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.804419 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" event={"ID":"17c9d2ac-856c-4169-81d6-cd350a5890de","Type":"ContainerDied","Data":"c6f3a7b0610e1c860820b5892116308f362c84f8dda1a79addc7c7278f1e4496"} Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.804371 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.804464 4799 scope.go:117] "RemoveContainer" containerID="92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.808911 4799 generic.go:334] "Generic (PLEG): container finished" podID="f89b2989-cb05-4401-a5a2-11e047187c23" containerID="20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506" exitCode=0 Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.808985 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.808998 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" event={"ID":"f89b2989-cb05-4401-a5a2-11e047187c23","Type":"ContainerDied","Data":"20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506"} Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.809039 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f75f647f5-ddhrl" event={"ID":"f89b2989-cb05-4401-a5a2-11e047187c23","Type":"ContainerDied","Data":"90c05ac594fc873aa57ae600fa92186f4fb7c6146770b49839fcf5c87f35974d"} Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.827087 4799 scope.go:117] "RemoveContainer" containerID="92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223" Feb 16 12:35:11 crc kubenswrapper[4799]: E0216 12:35:11.828021 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223\": container with ID starting with 92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223 not found: ID does not exist" containerID="92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.828429 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223"} err="failed to get container status \"92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223\": rpc error: code = NotFound desc = could not find container \"92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223\": container with ID starting with 92a1f146783e8e275d3b50844252ccedf8c366c5ba7c3ecb02e3d27cb1cf7223 not found: ID does not 
exist" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.828634 4799 scope.go:117] "RemoveContainer" containerID="20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.851024 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk"] Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.857195 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb45-gs2tk"] Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.863637 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f75f647f5-ddhrl"] Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.874700 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f75f647f5-ddhrl"] Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.955962 4799 scope.go:117] "RemoveContainer" containerID="20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506" Feb 16 12:35:11 crc kubenswrapper[4799]: E0216 12:35:11.958693 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506\": container with ID starting with 20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506 not found: ID does not exist" containerID="20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506" Feb 16 12:35:11 crc kubenswrapper[4799]: I0216 12:35:11.958787 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506"} err="failed to get container status \"20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506\": rpc error: code = NotFound desc = could not find 
container \"20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506\": container with ID starting with 20699459144eacd64b1ea9d1026b9bf4f011f28129b9ca2e0fe01f9097fdd506 not found: ID does not exist" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.145717 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262103 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"] Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262446 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="extract-content" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262459 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="extract-content" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262476 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="extract-content" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262485 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="extract-content" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262498 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262505 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262516 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="extract-utilities" Feb 16 
12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262522 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="extract-utilities" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262531 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89b2989-cb05-4401-a5a2-11e047187c23" containerName="controller-manager" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262538 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89b2989-cb05-4401-a5a2-11e047187c23" containerName="controller-manager" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262550 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262556 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262567 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c9d2ac-856c-4169-81d6-cd350a5890de" containerName="route-controller-manager" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262573 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c9d2ac-856c-4169-81d6-cd350a5890de" containerName="route-controller-manager" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262582 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerName="extract-content" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262588 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerName="extract-content" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262599 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" 
containerName="extract-utilities" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262605 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerName="extract-utilities" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262613 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="extract-utilities" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262619 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="extract-utilities" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.262633 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262641 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262761 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c9d2ac-856c-4169-81d6-cd350a5890de" containerName="route-controller-manager" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262775 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="694320f7-5f83-4b9c-9995-6ec38f6ee4cb" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262784 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89b2989-cb05-4401-a5a2-11e047187c23" containerName="controller-manager" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262792 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9501d397-7cf8-4712-b7bf-be0fc0c5eca4" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.262803 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerName="registry-server" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.263266 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.263784 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"] Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.266800 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.267275 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.267431 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.267614 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.269230 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.269690 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.275499 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"] Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.314818 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-utilities\") pod \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.314925 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-catalog-content\") pod \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.314989 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4m8p\" (UniqueName: \"kubernetes.io/projected/8c24a8cb-4a90-46d9-a128-64c6b00fa185-kube-api-access-p4m8p\") pod \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\" (UID: \"8c24a8cb-4a90-46d9-a128-64c6b00fa185\") " Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.315472 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.315743 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.317474 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-utilities" (OuterVolumeSpecName: "utilities") pod 
"8c24a8cb-4a90-46d9-a128-64c6b00fa185" (UID: "8c24a8cb-4a90-46d9-a128-64c6b00fa185"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.317541 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.318823 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.319039 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.319717 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.319895 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.323019 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"] Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.325510 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c24a8cb-4a90-46d9-a128-64c6b00fa185-kube-api-access-p4m8p" (OuterVolumeSpecName: "kube-api-access-p4m8p") pod "8c24a8cb-4a90-46d9-a128-64c6b00fa185" (UID: "8c24a8cb-4a90-46d9-a128-64c6b00fa185"). InnerVolumeSpecName "kube-api-access-p4m8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.325530 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.416969 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vb59\" (UniqueName: \"kubernetes.io/projected/d6149f12-b635-4168-9ccc-a3d6e424f325-kube-api-access-7vb59\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417379 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-client-ca\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417410 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6149f12-b635-4168-9ccc-a3d6e424f325-serving-cert\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417436 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-config\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " 
pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417477 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-client-ca\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417495 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-proxy-ca-bundles\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417520 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c39e15-aa3e-4a84-b0d3-d394643e6778-serving-cert\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417546 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-config\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417573 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9zsn9\" (UniqueName: \"kubernetes.io/projected/f7c39e15-aa3e-4a84-b0d3-d394643e6778-kube-api-access-9zsn9\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417611 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4m8p\" (UniqueName: \"kubernetes.io/projected/8c24a8cb-4a90-46d9-a128-64c6b00fa185-kube-api-access-p4m8p\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.417622 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.466414 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c24a8cb-4a90-46d9-a128-64c6b00fa185" (UID: "8c24a8cb-4a90-46d9-a128-64c6b00fa185"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.519178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-proxy-ca-bundles\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.519262 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c39e15-aa3e-4a84-b0d3-d394643e6778-serving-cert\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.519317 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-config\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.519350 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zsn9\" (UniqueName: \"kubernetes.io/projected/f7c39e15-aa3e-4a84-b0d3-d394643e6778-kube-api-access-9zsn9\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.519448 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vb59\" (UniqueName: 
\"kubernetes.io/projected/d6149f12-b635-4168-9ccc-a3d6e424f325-kube-api-access-7vb59\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.519961 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-client-ca\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.520000 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6149f12-b635-4168-9ccc-a3d6e424f325-serving-cert\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.520038 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-config\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.520091 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-client-ca\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.520177 4799 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c24a8cb-4a90-46d9-a128-64c6b00fa185-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.521488 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-client-ca\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.522185 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-config\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.522381 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-client-ca\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.522875 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-config\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.523494 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-proxy-ca-bundles\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.524386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c39e15-aa3e-4a84-b0d3-d394643e6778-serving-cert\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.532699 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6149f12-b635-4168-9ccc-a3d6e424f325-serving-cert\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.541980 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vb59\" (UniqueName: \"kubernetes.io/projected/d6149f12-b635-4168-9ccc-a3d6e424f325-kube-api-access-7vb59\") pod \"controller-manager-6cd9ff496d-dpk9w\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") " pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.542989 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zsn9\" (UniqueName: \"kubernetes.io/projected/f7c39e15-aa3e-4a84-b0d3-d394643e6778-kube-api-access-9zsn9\") pod \"route-controller-manager-66d9555f7d-fsskb\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") " pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc 
kubenswrapper[4799]: I0216 12:35:12.657819 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.671636 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.827748 4799 generic.go:334] "Generic (PLEG): container finished" podID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" containerID="77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e" exitCode=0 Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.827853 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p7r7" event={"ID":"8c24a8cb-4a90-46d9-a128-64c6b00fa185","Type":"ContainerDied","Data":"77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e"} Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.827901 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p7r7" event={"ID":"8c24a8cb-4a90-46d9-a128-64c6b00fa185","Type":"ContainerDied","Data":"8ea488d0c7747f7cb18d59e1c9d99ae36ae54bca2eec8f2f7fd46e9f25ac245b"} Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.827936 4799 scope.go:117] "RemoveContainer" containerID="77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.828115 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8p7r7" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.849964 4799 scope.go:117] "RemoveContainer" containerID="c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.895994 4799 scope.go:117] "RemoveContainer" containerID="4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.927382 4799 scope.go:117] "RemoveContainer" containerID="77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.928800 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8p7r7"] Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.930478 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e\": container with ID starting with 77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e not found: ID does not exist" containerID="77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.930521 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e"} err="failed to get container status \"77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e\": rpc error: code = NotFound desc = could not find container \"77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e\": container with ID starting with 77ba1308b18d059bc844f1be290149387c9e614cee3c13942dc8b0abfbefb70e not found: ID does not exist" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.930564 4799 scope.go:117] "RemoveContainer" 
containerID="c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.931359 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12\": container with ID starting with c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12 not found: ID does not exist" containerID="c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.931421 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12"} err="failed to get container status \"c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12\": rpc error: code = NotFound desc = could not find container \"c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12\": container with ID starting with c0d22e562b02e9f9e85760a0a22dd68bbaab98a9ae4c3dfbb504b5dc3ccb2b12 not found: ID does not exist" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.931469 4799 scope.go:117] "RemoveContainer" containerID="4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e" Feb 16 12:35:12 crc kubenswrapper[4799]: E0216 12:35:12.931949 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e\": container with ID starting with 4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e not found: ID does not exist" containerID="4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.931992 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e"} err="failed to get container status \"4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e\": rpc error: code = NotFound desc = could not find container \"4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e\": container with ID starting with 4f1a759cd8f40a1956c4cb5593a2ecaacb036357c0abff9fe98f9b85ffa8d85e not found: ID does not exist" Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.932199 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8p7r7"] Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.943532 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"] Feb 16 12:35:12 crc kubenswrapper[4799]: I0216 12:35:12.987039 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"] Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.160629 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c9d2ac-856c-4169-81d6-cd350a5890de" path="/var/lib/kubelet/pods/17c9d2ac-856c-4169-81d6-cd350a5890de/volumes" Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.162376 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c24a8cb-4a90-46d9-a128-64c6b00fa185" path="/var/lib/kubelet/pods/8c24a8cb-4a90-46d9-a128-64c6b00fa185/volumes" Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.163842 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89b2989-cb05-4401-a5a2-11e047187c23" path="/var/lib/kubelet/pods/f89b2989-cb05-4401-a5a2-11e047187c23/volumes" Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.852341 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" 
event={"ID":"d6149f12-b635-4168-9ccc-a3d6e424f325","Type":"ContainerStarted","Data":"ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b"}
Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.853555 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" event={"ID":"d6149f12-b635-4168-9ccc-a3d6e424f325","Type":"ContainerStarted","Data":"17e640812fe6dda11528bcaf95c997baedf8c9cf9f37fb77e3d6c4e418f13068"}
Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.853600 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"
Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.856706 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" event={"ID":"f7c39e15-aa3e-4a84-b0d3-d394643e6778","Type":"ContainerStarted","Data":"a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c"}
Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.856896 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" event={"ID":"f7c39e15-aa3e-4a84-b0d3-d394643e6778","Type":"ContainerStarted","Data":"630b2bf232a4ad4a3f63455becfe89bbabff844601cc07dfa314528b778da3a0"}
Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.857562 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"
Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.862254 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"
Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.871767 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"
Feb 16 12:35:13 crc kubenswrapper[4799]: I0216 12:35:13.878566 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" podStartSLOduration=3.878526436 podStartE2EDuration="3.878526436s" podCreationTimestamp="2026-02-16 12:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:35:13.87832672 +0000 UTC m=+219.471342074" watchObservedRunningTime="2026-02-16 12:35:13.878526436 +0000 UTC m=+219.471541770"
Feb 16 12:35:15 crc kubenswrapper[4799]: I0216 12:35:15.373502 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:35:15 crc kubenswrapper[4799]: I0216 12:35:15.410535 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" podStartSLOduration=4.410507853 podStartE2EDuration="4.410507853s" podCreationTimestamp="2026-02-16 12:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:35:13.932378186 +0000 UTC m=+219.525393530" watchObservedRunningTime="2026-02-16 12:35:15.410507853 +0000 UTC m=+221.003523227"
Feb 16 12:35:15 crc kubenswrapper[4799]: I0216 12:35:15.437307 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:35:17 crc kubenswrapper[4799]: I0216 12:35:17.585658 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bjtgr"]
Feb 16 12:35:17 crc kubenswrapper[4799]: I0216 12:35:17.586094 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bjtgr" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="registry-server" containerID="cri-o://782917260b598e8fa581200df49de3a9262756d31602af8009c6d054ce2952a4" gracePeriod=2
Feb 16 12:35:17 crc kubenswrapper[4799]: I0216 12:35:17.892854 4799 generic.go:334] "Generic (PLEG): container finished" podID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerID="782917260b598e8fa581200df49de3a9262756d31602af8009c6d054ce2952a4" exitCode=0
Feb 16 12:35:17 crc kubenswrapper[4799]: I0216 12:35:17.893173 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjtgr" event={"ID":"0b2108bc-d6b4-4de2-9163-f3d6714155b3","Type":"ContainerDied","Data":"782917260b598e8fa581200df49de3a9262756d31602af8009c6d054ce2952a4"}
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.019222 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.177454 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-utilities\") pod \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") "
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.177528 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-catalog-content\") pod \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") "
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.177611 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnhfl\" (UniqueName: \"kubernetes.io/projected/0b2108bc-d6b4-4de2-9163-f3d6714155b3-kube-api-access-nnhfl\") pod \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\" (UID: \"0b2108bc-d6b4-4de2-9163-f3d6714155b3\") "
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.179369 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-utilities" (OuterVolumeSpecName: "utilities") pod "0b2108bc-d6b4-4de2-9163-f3d6714155b3" (UID: "0b2108bc-d6b4-4de2-9163-f3d6714155b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.185413 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2108bc-d6b4-4de2-9163-f3d6714155b3-kube-api-access-nnhfl" (OuterVolumeSpecName: "kube-api-access-nnhfl") pod "0b2108bc-d6b4-4de2-9163-f3d6714155b3" (UID: "0b2108bc-d6b4-4de2-9163-f3d6714155b3"). InnerVolumeSpecName "kube-api-access-nnhfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.254989 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b2108bc-d6b4-4de2-9163-f3d6714155b3" (UID: "0b2108bc-d6b4-4de2-9163-f3d6714155b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.279874 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.279907 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b2108bc-d6b4-4de2-9163-f3d6714155b3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.279919 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnhfl\" (UniqueName: \"kubernetes.io/projected/0b2108bc-d6b4-4de2-9163-f3d6714155b3-kube-api-access-nnhfl\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.901698 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjtgr" event={"ID":"0b2108bc-d6b4-4de2-9163-f3d6714155b3","Type":"ContainerDied","Data":"24e8608df9d74d5f2c16cf2efa9af8c60e0eced2439b40e22c7959c84c88e7be"}
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.901840 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjtgr"
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.902052 4799 scope.go:117] "RemoveContainer" containerID="782917260b598e8fa581200df49de3a9262756d31602af8009c6d054ce2952a4"
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.924538 4799 scope.go:117] "RemoveContainer" containerID="04316b593a0200df9981b5e202dee5fe7157b9648155bdbf9219be861bcef04c"
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.935923 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bjtgr"]
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.951972 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bjtgr"]
Feb 16 12:35:18 crc kubenswrapper[4799]: I0216 12:35:18.954277 4799 scope.go:117] "RemoveContainer" containerID="6ce086a1221dac988b6bbc2dea1b0c85457f5e96a0193701021cac5ab0172fee"
Feb 16 12:35:19 crc kubenswrapper[4799]: I0216 12:35:19.156329 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" path="/var/lib/kubelet/pods/0b2108bc-d6b4-4de2-9163-f3d6714155b3/volumes"
Feb 16 12:35:21 crc kubenswrapper[4799]: I0216 12:35:21.792829 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 12:35:21 crc kubenswrapper[4799]: I0216 12:35:21.793025 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 12:35:21 crc kubenswrapper[4799]: I0216 12:35:21.793206 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99"
Feb 16 12:35:21 crc kubenswrapper[4799]: I0216 12:35:21.794393 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 12:35:21 crc kubenswrapper[4799]: I0216 12:35:21.794520 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf" gracePeriod=600
Feb 16 12:35:21 crc kubenswrapper[4799]: I0216 12:35:21.945201 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf" exitCode=0
Feb 16 12:35:21 crc kubenswrapper[4799]: I0216 12:35:21.945304 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf"}
Feb 16 12:35:22 crc kubenswrapper[4799]: I0216 12:35:22.956249 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"99ae92538ccb5394a598414e9620dd6f3da82af389aa189751d9526a42ca1516"}
Feb 16 12:35:26 crc kubenswrapper[4799]: I0216 12:35:26.128836 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sl8tw"]
Feb 16 12:35:30 crc kubenswrapper[4799]: I0216 12:35:30.933254 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"]
Feb 16 12:35:30 crc kubenswrapper[4799]: I0216 12:35:30.934485 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" podUID="d6149f12-b635-4168-9ccc-a3d6e424f325" containerName="controller-manager" containerID="cri-o://ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b" gracePeriod=30
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.020496 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"]
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.020749 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" podUID="f7c39e15-aa3e-4a84-b0d3-d394643e6778" containerName="route-controller-manager" containerID="cri-o://a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c" gracePeriod=30
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.541375 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.590786 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-client-ca\") pod \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.590909 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c39e15-aa3e-4a84-b0d3-d394643e6778-serving-cert\") pod \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.590998 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zsn9\" (UniqueName: \"kubernetes.io/projected/f7c39e15-aa3e-4a84-b0d3-d394643e6778-kube-api-access-9zsn9\") pod \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.591808 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-client-ca" (OuterVolumeSpecName: "client-ca") pod "f7c39e15-aa3e-4a84-b0d3-d394643e6778" (UID: "f7c39e15-aa3e-4a84-b0d3-d394643e6778"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.592245 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-config\") pod \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\" (UID: \"f7c39e15-aa3e-4a84-b0d3-d394643e6778\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.592634 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.592751 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-config" (OuterVolumeSpecName: "config") pod "f7c39e15-aa3e-4a84-b0d3-d394643e6778" (UID: "f7c39e15-aa3e-4a84-b0d3-d394643e6778"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.596728 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c39e15-aa3e-4a84-b0d3-d394643e6778-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f7c39e15-aa3e-4a84-b0d3-d394643e6778" (UID: "f7c39e15-aa3e-4a84-b0d3-d394643e6778"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.596852 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c39e15-aa3e-4a84-b0d3-d394643e6778-kube-api-access-9zsn9" (OuterVolumeSpecName: "kube-api-access-9zsn9") pod "f7c39e15-aa3e-4a84-b0d3-d394643e6778" (UID: "f7c39e15-aa3e-4a84-b0d3-d394643e6778"). InnerVolumeSpecName "kube-api-access-9zsn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.625333 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.693577 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-proxy-ca-bundles\") pod \"d6149f12-b635-4168-9ccc-a3d6e424f325\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.693659 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vb59\" (UniqueName: \"kubernetes.io/projected/d6149f12-b635-4168-9ccc-a3d6e424f325-kube-api-access-7vb59\") pod \"d6149f12-b635-4168-9ccc-a3d6e424f325\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.693757 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-config\") pod \"d6149f12-b635-4168-9ccc-a3d6e424f325\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.693799 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6149f12-b635-4168-9ccc-a3d6e424f325-serving-cert\") pod \"d6149f12-b635-4168-9ccc-a3d6e424f325\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.693828 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-client-ca\") pod \"d6149f12-b635-4168-9ccc-a3d6e424f325\" (UID: \"d6149f12-b635-4168-9ccc-a3d6e424f325\") "
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.694184 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7c39e15-aa3e-4a84-b0d3-d394643e6778-config\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.694204 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7c39e15-aa3e-4a84-b0d3-d394643e6778-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.694215 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zsn9\" (UniqueName: \"kubernetes.io/projected/f7c39e15-aa3e-4a84-b0d3-d394643e6778-kube-api-access-9zsn9\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.694577 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d6149f12-b635-4168-9ccc-a3d6e424f325" (UID: "d6149f12-b635-4168-9ccc-a3d6e424f325"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.694730 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-client-ca" (OuterVolumeSpecName: "client-ca") pod "d6149f12-b635-4168-9ccc-a3d6e424f325" (UID: "d6149f12-b635-4168-9ccc-a3d6e424f325"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.694768 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-config" (OuterVolumeSpecName: "config") pod "d6149f12-b635-4168-9ccc-a3d6e424f325" (UID: "d6149f12-b635-4168-9ccc-a3d6e424f325"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.696805 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6149f12-b635-4168-9ccc-a3d6e424f325-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d6149f12-b635-4168-9ccc-a3d6e424f325" (UID: "d6149f12-b635-4168-9ccc-a3d6e424f325"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.696837 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6149f12-b635-4168-9ccc-a3d6e424f325-kube-api-access-7vb59" (OuterVolumeSpecName: "kube-api-access-7vb59") pod "d6149f12-b635-4168-9ccc-a3d6e424f325" (UID: "d6149f12-b635-4168-9ccc-a3d6e424f325"). InnerVolumeSpecName "kube-api-access-7vb59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.795355 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-config\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.795402 4799 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6149f12-b635-4168-9ccc-a3d6e424f325-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.795413 4799 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.795422 4799 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6149f12-b635-4168-9ccc-a3d6e424f325-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:31 crc kubenswrapper[4799]: I0216 12:35:31.795432 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vb59\" (UniqueName: \"kubernetes.io/projected/d6149f12-b635-4168-9ccc-a3d6e424f325-kube-api-access-7vb59\") on node \"crc\" DevicePath \"\""
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.028504 4799 generic.go:334] "Generic (PLEG): container finished" podID="d6149f12-b635-4168-9ccc-a3d6e424f325" containerID="ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b" exitCode=0
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.028626 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.028630 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" event={"ID":"d6149f12-b635-4168-9ccc-a3d6e424f325","Type":"ContainerDied","Data":"ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b"}
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.028688 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w" event={"ID":"d6149f12-b635-4168-9ccc-a3d6e424f325","Type":"ContainerDied","Data":"17e640812fe6dda11528bcaf95c997baedf8c9cf9f37fb77e3d6c4e418f13068"}
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.028708 4799 scope.go:117] "RemoveContainer" containerID="ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.032213 4799 generic.go:334] "Generic (PLEG): container finished" podID="f7c39e15-aa3e-4a84-b0d3-d394643e6778" containerID="a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c" exitCode=0
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.032275 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" event={"ID":"f7c39e15-aa3e-4a84-b0d3-d394643e6778","Type":"ContainerDied","Data":"a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c"}
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.032294 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.032321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb" event={"ID":"f7c39e15-aa3e-4a84-b0d3-d394643e6778","Type":"ContainerDied","Data":"630b2bf232a4ad4a3f63455becfe89bbabff844601cc07dfa314528b778da3a0"}
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.052373 4799 scope.go:117] "RemoveContainer" containerID="ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b"
Feb 16 12:35:32 crc kubenswrapper[4799]: E0216 12:35:32.053080 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b\": container with ID starting with ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b not found: ID does not exist" containerID="ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.053178 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b"} err="failed to get container status \"ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b\": rpc error: code = NotFound desc = could not find container \"ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b\": container with ID starting with ebd380a0cd27924d473da602a4d66272f32dcd9ffc48b225db3634894c52fc8b not found: ID does not exist"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.053216 4799 scope.go:117] "RemoveContainer" containerID="a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.066293 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"]
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.070860 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cd9ff496d-dpk9w"]
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.088569 4799 scope.go:117] "RemoveContainer" containerID="a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c"
Feb 16 12:35:32 crc kubenswrapper[4799]: E0216 12:35:32.088901 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c\": container with ID starting with a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c not found: ID does not exist" containerID="a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.088931 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c"} err="failed to get container status \"a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c\": rpc error: code = NotFound desc = could not find container \"a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c\": container with ID starting with a7a931b7e67b4a52176289ee591381ef71f71dd45690a1a2422fbe8db344bb1c not found: ID does not exist"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.090487 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"]
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.093246 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9555f7d-fsskb"]
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.274399 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"]
Feb 16 12:35:32 crc kubenswrapper[4799]: E0216 12:35:32.274832 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="extract-content"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.274854 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="extract-content"
Feb 16 12:35:32 crc kubenswrapper[4799]: E0216 12:35:32.274877 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="registry-server"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.274890 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="registry-server"
Feb 16 12:35:32 crc kubenswrapper[4799]: E0216 12:35:32.274917 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c39e15-aa3e-4a84-b0d3-d394643e6778" containerName="route-controller-manager"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.274928 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c39e15-aa3e-4a84-b0d3-d394643e6778" containerName="route-controller-manager"
Feb 16 12:35:32 crc kubenswrapper[4799]: E0216 12:35:32.274948 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="extract-utilities"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.274959 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="extract-utilities"
Feb 16 12:35:32 crc kubenswrapper[4799]: E0216 12:35:32.274982 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6149f12-b635-4168-9ccc-a3d6e424f325" containerName="controller-manager"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.274993 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6149f12-b635-4168-9ccc-a3d6e424f325" containerName="controller-manager"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.275209 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2108bc-d6b4-4de2-9163-f3d6714155b3" containerName="registry-server"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.275234 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6149f12-b635-4168-9ccc-a3d6e424f325" containerName="controller-manager"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.275314 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c39e15-aa3e-4a84-b0d3-d394643e6778" containerName="route-controller-manager"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.275982 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.278523 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc"]
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.279616 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.281191 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.281537 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.281771 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.281984 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.282424 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.282461 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.283002 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.283375 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.283881 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.283986 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.284144 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.285750 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.290754 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc"]
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.295625 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.298355 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"]
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.301828 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-proxy-ca-bundles\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.301893 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b33b9dc5-5e30-4b97-b878-b6c0710607b5-serving-cert\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.301923 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33b9dc5-5e30-4b97-b878-b6c0710607b5-config\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.301959 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbbf\" (UniqueName: \"kubernetes.io/projected/b33b9dc5-5e30-4b97-b878-b6c0710607b5-kube-api-access-qfbbf\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.301986 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d443757-2def-4b80-bf1e-a32139231d96-serving-cert\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.302031 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-config\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.302066 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-client-ca\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.302116 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpkl8\" (UniqueName: \"kubernetes.io/projected/5d443757-2def-4b80-bf1e-a32139231d96-kube-api-access-kpkl8\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.302202 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b33b9dc5-5e30-4b97-b878-b6c0710607b5-client-ca\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403421 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-client-ca\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403498 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpkl8\" (UniqueName: \"kubernetes.io/projected/5d443757-2def-4b80-bf1e-a32139231d96-kube-api-access-kpkl8\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"
Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403530 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/b33b9dc5-5e30-4b97-b878-b6c0710607b5-client-ca\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403560 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-proxy-ca-bundles\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403592 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b33b9dc5-5e30-4b97-b878-b6c0710607b5-serving-cert\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403616 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33b9dc5-5e30-4b97-b878-b6c0710607b5-config\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403662 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbbf\" (UniqueName: \"kubernetes.io/projected/b33b9dc5-5e30-4b97-b878-b6c0710607b5-kube-api-access-qfbbf\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 
16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403698 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d443757-2def-4b80-bf1e-a32139231d96-serving-cert\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.403750 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-config\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.404849 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-client-ca\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.405440 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-proxy-ca-bundles\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.405725 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33b9dc5-5e30-4b97-b878-b6c0710607b5-config\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.406058 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d443757-2def-4b80-bf1e-a32139231d96-config\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.406637 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b33b9dc5-5e30-4b97-b878-b6c0710607b5-client-ca\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.409006 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d443757-2def-4b80-bf1e-a32139231d96-serving-cert\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.419081 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b33b9dc5-5e30-4b97-b878-b6c0710607b5-serving-cert\") pod \"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.431417 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbbf\" (UniqueName: \"kubernetes.io/projected/b33b9dc5-5e30-4b97-b878-b6c0710607b5-kube-api-access-qfbbf\") pod 
\"route-controller-manager-7b5fc7b649-x2qdc\" (UID: \"b33b9dc5-5e30-4b97-b878-b6c0710607b5\") " pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.437796 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpkl8\" (UniqueName: \"kubernetes.io/projected/5d443757-2def-4b80-bf1e-a32139231d96-kube-api-access-kpkl8\") pod \"controller-manager-9c6dc7db9-2cbrf\" (UID: \"5d443757-2def-4b80-bf1e-a32139231d96\") " pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.604360 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.614251 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.816426 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc"] Feb 16 12:35:32 crc kubenswrapper[4799]: W0216 12:35:32.832510 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb33b9dc5_5e30_4b97_b878_b6c0710607b5.slice/crio-12cf94acf050270ae15d870464c864dd63570566b345d75be88cf30d3d080b2d WatchSource:0}: Error finding container 12cf94acf050270ae15d870464c864dd63570566b345d75be88cf30d3d080b2d: Status 404 returned error can't find the container with id 12cf94acf050270ae15d870464c864dd63570566b345d75be88cf30d3d080b2d Feb 16 12:35:32 crc kubenswrapper[4799]: I0216 12:35:32.855769 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf"] Feb 16 12:35:32 crc 
kubenswrapper[4799]: W0216 12:35:32.858726 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d443757_2def_4b80_bf1e_a32139231d96.slice/crio-1b7128d458463a111efcc867535ce16c9b66e75791e566ef9a8d6f91664bdf87 WatchSource:0}: Error finding container 1b7128d458463a111efcc867535ce16c9b66e75791e566ef9a8d6f91664bdf87: Status 404 returned error can't find the container with id 1b7128d458463a111efcc867535ce16c9b66e75791e566ef9a8d6f91664bdf87 Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.040450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" event={"ID":"b33b9dc5-5e30-4b97-b878-b6c0710607b5","Type":"ContainerStarted","Data":"9a6f1dece03755243486eea30aa7605741f20973872c0ad18026b47776c96b23"} Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.040754 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" event={"ID":"b33b9dc5-5e30-4b97-b878-b6c0710607b5","Type":"ContainerStarted","Data":"12cf94acf050270ae15d870464c864dd63570566b345d75be88cf30d3d080b2d"} Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.040906 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.044544 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" event={"ID":"5d443757-2def-4b80-bf1e-a32139231d96","Type":"ContainerStarted","Data":"e3c1f826d368171f84af4a5d06e4717a718c9f468c71bd7cde1ea7665938107e"} Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.044642 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:33 
crc kubenswrapper[4799]: I0216 12:35:33.044686 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" event={"ID":"5d443757-2def-4b80-bf1e-a32139231d96","Type":"ContainerStarted","Data":"1b7128d458463a111efcc867535ce16c9b66e75791e566ef9a8d6f91664bdf87"} Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.049835 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.059863 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" podStartSLOduration=2.059842987 podStartE2EDuration="2.059842987s" podCreationTimestamp="2026-02-16 12:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:35:33.05745839 +0000 UTC m=+238.650473734" watchObservedRunningTime="2026-02-16 12:35:33.059842987 +0000 UTC m=+238.652858321" Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.156224 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6149f12-b635-4168-9ccc-a3d6e424f325" path="/var/lib/kubelet/pods/d6149f12-b635-4168-9ccc-a3d6e424f325/volumes" Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.156765 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c39e15-aa3e-4a84-b0d3-d394643e6778" path="/var/lib/kubelet/pods/f7c39e15-aa3e-4a84-b0d3-d394643e6778/volumes" Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.439714 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b5fc7b649-x2qdc" Feb 16 12:35:33 crc kubenswrapper[4799]: I0216 12:35:33.466900 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-9c6dc7db9-2cbrf" podStartSLOduration=3.466877775 podStartE2EDuration="3.466877775s" podCreationTimestamp="2026-02-16 12:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:35:33.087401707 +0000 UTC m=+238.680417041" watchObservedRunningTime="2026-02-16 12:35:33.466877775 +0000 UTC m=+239.059893109" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.907855 4799 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.909404 4799 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.909708 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e" gracePeriod=15 Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.909834 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.909853 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b" gracePeriod=15 Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.909828 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe" gracePeriod=15 Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.909920 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823" gracePeriod=15 Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.909866 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092" gracePeriod=15 Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.912114 4799 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:35:44 crc kubenswrapper[4799]: E0216 12:35:44.912864 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.912893 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:35:44 crc kubenswrapper[4799]: E0216 12:35:44.912916 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.912934 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 12:35:44 crc kubenswrapper[4799]: E0216 12:35:44.912959 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.912975 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:35:44 crc kubenswrapper[4799]: E0216 12:35:44.912999 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913015 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:35:44 crc kubenswrapper[4799]: E0216 12:35:44.913045 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913060 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:35:44 crc kubenswrapper[4799]: E0216 12:35:44.913085 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913101 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:35:44 crc kubenswrapper[4799]: E0216 12:35:44.913170 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913192 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913424 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913459 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913482 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913502 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913522 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.913540 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:35:44 crc 
kubenswrapper[4799]: I0216 12:35:44.919099 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.919168 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.919196 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.919224 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:44 crc kubenswrapper[4799]: I0216 12:35:44.919250 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.015884 4799 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.016017 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.022560 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.022655 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.022766 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.022817 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.022847 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.022888 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.022925 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.022966 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.023081 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.023183 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.023250 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.023305 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.023363 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.123461 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.123521 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.123579 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.123656 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.123703 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.123728 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.134069 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.136284 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.137364 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092" exitCode=0 Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.137399 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b" exitCode=0 Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.137409 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe" exitCode=0 Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.137419 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823" exitCode=2 Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.137494 4799 scope.go:117] "RemoveContainer" containerID="6060b0a14bd816aac5f5b4376127723ed458abdc47092b920ddfaff970b95aae" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.139287 4799 generic.go:334] 
"Generic (PLEG): container finished" podID="159c1a7c-133c-47d5-990d-c0869b0eafa4" containerID="17894eb17e3b317c8d08f87cc926b57a6a500c394250588828a0e0ccd6d2f790" exitCode=0 Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.139338 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"159c1a7c-133c-47d5-990d-c0869b0eafa4","Type":"ContainerDied","Data":"17894eb17e3b317c8d08f87cc926b57a6a500c394250588828a0e0ccd6d2f790"} Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.140391 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.141226 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.152286 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.152554 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: 
connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: E0216 12:35:45.994337 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: E0216 12:35:45.995282 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: E0216 12:35:45.995724 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: E0216 12:35:45.996270 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: E0216 12:35:45.996660 4799 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:45 crc kubenswrapper[4799]: I0216 12:35:45.996699 4799 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 12:35:45 crc kubenswrapper[4799]: E0216 12:35:45.997063 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: 
connection refused" interval="200ms" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.148808 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:35:46 crc kubenswrapper[4799]: E0216 12:35:46.198485 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.563485 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.564478 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:46 crc kubenswrapper[4799]: E0216 12:35:46.600330 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.651471 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-var-lock\") pod \"159c1a7c-133c-47d5-990d-c0869b0eafa4\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.651556 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-kubelet-dir\") pod \"159c1a7c-133c-47d5-990d-c0869b0eafa4\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.651652 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/159c1a7c-133c-47d5-990d-c0869b0eafa4-kube-api-access\") pod \"159c1a7c-133c-47d5-990d-c0869b0eafa4\" (UID: \"159c1a7c-133c-47d5-990d-c0869b0eafa4\") " Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.651784 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "159c1a7c-133c-47d5-990d-c0869b0eafa4" (UID: "159c1a7c-133c-47d5-990d-c0869b0eafa4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.651805 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-var-lock" (OuterVolumeSpecName: "var-lock") pod "159c1a7c-133c-47d5-990d-c0869b0eafa4" (UID: "159c1a7c-133c-47d5-990d-c0869b0eafa4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.652323 4799 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.652377 4799 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/159c1a7c-133c-47d5-990d-c0869b0eafa4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.660196 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159c1a7c-133c-47d5-990d-c0869b0eafa4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "159c1a7c-133c-47d5-990d-c0869b0eafa4" (UID: "159c1a7c-133c-47d5-990d-c0869b0eafa4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:35:46 crc kubenswrapper[4799]: I0216 12:35:46.753569 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/159c1a7c-133c-47d5-990d-c0869b0eafa4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.157633 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.158783 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"159c1a7c-133c-47d5-990d-c0869b0eafa4","Type":"ContainerDied","Data":"4ceb39162227c5cf6447913a8b79c2e5bc8eed7aba1bb5863839ff9a8f39b29d"} Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.158822 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ceb39162227c5cf6447913a8b79c2e5bc8eed7aba1bb5863839ff9a8f39b29d" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.176099 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.326357 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.327963 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.328881 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.329861 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.362981 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.363190 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.363300 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.363505 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.363623 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.363532 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.364187 4799 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.364373 4799 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:47 crc kubenswrapper[4799]: I0216 12:35:47.364511 4799 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:47 crc kubenswrapper[4799]: E0216 12:35:47.401925 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.170979 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.171976 4799 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e" exitCode=0 Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.172079 4799 scope.go:117] "RemoveContainer" containerID="5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.172184 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.191902 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.193437 4799 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.204143 4799 scope.go:117] "RemoveContainer" containerID="b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.222017 4799 scope.go:117] "RemoveContainer" containerID="4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.240548 4799 scope.go:117] "RemoveContainer" containerID="3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.269150 4799 scope.go:117] "RemoveContainer" containerID="f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.286270 4799 scope.go:117] "RemoveContainer" containerID="3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.314038 4799 scope.go:117] "RemoveContainer" containerID="5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092" Feb 16 12:35:48 crc kubenswrapper[4799]: E0216 12:35:48.314752 4799 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\": container with ID starting with 5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092 not found: ID does not exist" containerID="5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.314820 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092"} err="failed to get container status \"5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\": rpc error: code = NotFound desc = could not find container \"5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092\": container with ID starting with 5173ad70b20122f9cd372f3225f61200111eb093b35cd5cadaf84493088be092 not found: ID does not exist" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.314865 4799 scope.go:117] "RemoveContainer" containerID="b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b" Feb 16 12:35:48 crc kubenswrapper[4799]: E0216 12:35:48.315417 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\": container with ID starting with b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b not found: ID does not exist" containerID="b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.315465 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b"} err="failed to get container status \"b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\": rpc error: code = NotFound desc = could 
not find container \"b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b\": container with ID starting with b8d899a02674a352c28ecf6a74ea79cb8ba7d5f0a3cd0649b6713f107623bf2b not found: ID does not exist" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.315508 4799 scope.go:117] "RemoveContainer" containerID="4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe" Feb 16 12:35:48 crc kubenswrapper[4799]: E0216 12:35:48.316344 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\": container with ID starting with 4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe not found: ID does not exist" containerID="4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.316398 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe"} err="failed to get container status \"4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\": rpc error: code = NotFound desc = could not find container \"4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe\": container with ID starting with 4d83a6290f0c155297d6c656b66ccb518ef35883ef593b02357663d008ec6dbe not found: ID does not exist" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.316440 4799 scope.go:117] "RemoveContainer" containerID="3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823" Feb 16 12:35:48 crc kubenswrapper[4799]: E0216 12:35:48.317207 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\": container with ID starting with 3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823 not found: 
ID does not exist" containerID="3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.317353 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823"} err="failed to get container status \"3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\": rpc error: code = NotFound desc = could not find container \"3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823\": container with ID starting with 3be9e06749dfa9b94c4c8b1be56eccc4f6f33c076ab8756aec7a592f8b6f9823 not found: ID does not exist" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.317465 4799 scope.go:117] "RemoveContainer" containerID="f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e" Feb 16 12:35:48 crc kubenswrapper[4799]: E0216 12:35:48.318081 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\": container with ID starting with f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e not found: ID does not exist" containerID="f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.318211 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e"} err="failed to get container status \"f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\": rpc error: code = NotFound desc = could not find container \"f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e\": container with ID starting with f21c9869e711fec78b8970f14a5d0f1aa723b01f7c020ed46c46737ebf14e85e not found: ID does not exist" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.318313 4799 
scope.go:117] "RemoveContainer" containerID="3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a" Feb 16 12:35:48 crc kubenswrapper[4799]: E0216 12:35:48.318923 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\": container with ID starting with 3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a not found: ID does not exist" containerID="3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a" Feb 16 12:35:48 crc kubenswrapper[4799]: I0216 12:35:48.319022 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a"} err="failed to get container status \"3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\": rpc error: code = NotFound desc = could not find container \"3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a\": container with ID starting with 3832c755d688e4bf466cd12b35ea3293b9260617de040fa4c61c9cd2ac7b6d1a not found: ID does not exist" Feb 16 12:35:49 crc kubenswrapper[4799]: E0216 12:35:49.003520 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s" Feb 16 12:35:49 crc kubenswrapper[4799]: I0216 12:35:49.161172 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 12:35:49 crc kubenswrapper[4799]: E0216 12:35:49.958329 4799 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.154:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:49 crc kubenswrapper[4799]: I0216 12:35:49.958887 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:49 crc kubenswrapper[4799]: W0216 12:35:49.986192 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c82dbd881eb62ec9fc84e6f04ed5c3050afd692baf8e1eca019beb13e8c58035 WatchSource:0}: Error finding container c82dbd881eb62ec9fc84e6f04ed5c3050afd692baf8e1eca019beb13e8c58035: Status 404 returned error can't find the container with id c82dbd881eb62ec9fc84e6f04ed5c3050afd692baf8e1eca019beb13e8c58035 Feb 16 12:35:49 crc kubenswrapper[4799]: E0216 12:35:49.993332 4799 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894ba42e85134ac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:35:49.991691436 +0000 UTC m=+255.584706780,LastTimestamp:2026-02-16 12:35:49.991691436 +0000 UTC m=+255.584706780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:35:50 crc kubenswrapper[4799]: I0216 12:35:50.189076 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c82dbd881eb62ec9fc84e6f04ed5c3050afd692baf8e1eca019beb13e8c58035"} Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.177078 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" containerName="oauth-openshift" containerID="cri-o://4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b" gracePeriod=15 Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.198326 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ad6a0ba9b5c38b4a3f886bdf8a4cfce7564fa5893c1499c7706ddd34412e0e51"} Feb 16 12:35:51 crc kubenswrapper[4799]: E0216 12:35:51.199349 4799 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.199386 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.811613 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.813005 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.813815 4799 status_manager.go:851] "Failed to get status for pod" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sl8tw\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952169 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-router-certs\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952286 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-trusted-ca-bundle\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952392 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-idp-0-file-data\") pod 
\"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952454 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-provider-selection\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952533 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-dir\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952638 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-error\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952730 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-cliconfig\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952753 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952796 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-session\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952845 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2w9n\" (UniqueName: \"kubernetes.io/projected/98bb2e4c-5ed3-4d64-b732-e740b80883f5-kube-api-access-g2w9n\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.952939 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-service-ca\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.953028 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-login\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.953082 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-ocp-branding-template\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc 
kubenswrapper[4799]: I0216 12:35:51.953162 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-serving-cert\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.953225 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-policies\") pod \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\" (UID: \"98bb2e4c-5ed3-4d64-b732-e740b80883f5\") " Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.953705 4799 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.954427 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.954639 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.954827 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.954948 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.963780 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.964190 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bb2e4c-5ed3-4d64-b732-e740b80883f5-kube-api-access-g2w9n" (OuterVolumeSpecName: "kube-api-access-g2w9n") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "kube-api-access-g2w9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.964211 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.966405 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.966931 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.967237 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.967727 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.968149 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:51 crc kubenswrapper[4799]: I0216 12:35:51.968753 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "98bb2e4c-5ed3-4d64-b732-e740b80883f5" (UID: "98bb2e4c-5ed3-4d64-b732-e740b80883f5"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.054789 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.054854 4799 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.054877 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.054897 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.054917 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.054944 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.054969 4799 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.054995 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.055018 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.055039 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2w9n\" (UniqueName: \"kubernetes.io/projected/98bb2e4c-5ed3-4d64-b732-e740b80883f5-kube-api-access-g2w9n\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.055065 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.055093 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.055114 4799 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98bb2e4c-5ed3-4d64-b732-e740b80883f5-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 12:35:52 crc kubenswrapper[4799]: 
E0216 12:35:52.205349 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="6.4s" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.209170 4799 generic.go:334] "Generic (PLEG): container finished" podID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" containerID="4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b" exitCode=0 Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.209257 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.209306 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" event={"ID":"98bb2e4c-5ed3-4d64-b732-e740b80883f5","Type":"ContainerDied","Data":"4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b"} Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.209433 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" event={"ID":"98bb2e4c-5ed3-4d64-b732-e740b80883f5","Type":"ContainerDied","Data":"0095740b5e3aaa73b87ec902763aece31f05c80b2aecc19d8b680dba407cc1dd"} Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.209480 4799 scope.go:117] "RemoveContainer" containerID="4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.211431 4799 status_manager.go:851] "Failed to get status for pod" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sl8tw\": dial tcp 38.102.83.154:6443: connect: connection 
refused" Feb 16 12:35:52 crc kubenswrapper[4799]: E0216 12:35:52.211441 4799 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.212173 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.238385 4799 status_manager.go:851] "Failed to get status for pod" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sl8tw\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.239030 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.257834 4799 scope.go:117] "RemoveContainer" containerID="4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b" Feb 16 12:35:52 crc kubenswrapper[4799]: E0216 12:35:52.260088 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b\": container with ID starting 
with 4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b not found: ID does not exist" containerID="4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b" Feb 16 12:35:52 crc kubenswrapper[4799]: I0216 12:35:52.260180 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b"} err="failed to get container status \"4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b\": rpc error: code = NotFound desc = could not find container \"4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b\": container with ID starting with 4f98034559f43a8a6e4b93c20bf10197f6075fb14e24a498c4eda969d34fe42b not found: ID does not exist" Feb 16 12:35:55 crc kubenswrapper[4799]: I0216 12:35:55.152607 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:55 crc kubenswrapper[4799]: I0216 12:35:55.153553 4799 status_manager.go:851] "Failed to get status for pod" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sl8tw\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:56 crc kubenswrapper[4799]: E0216 12:35:56.224237 4799 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.154:6443: connect: 
connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" volumeName="registry-storage" Feb 16 12:35:57 crc kubenswrapper[4799]: E0216 12:35:57.898529 4799 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894ba42e85134ac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:35:49.991691436 +0000 UTC m=+255.584706780,LastTimestamp:2026-02-16 12:35:49.991691436 +0000 UTC m=+255.584706780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:35:58 crc kubenswrapper[4799]: I0216 12:35:58.148436 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:58 crc kubenswrapper[4799]: I0216 12:35:58.149766 4799 status_manager.go:851] "Failed to get status for pod" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sl8tw\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:58 crc kubenswrapper[4799]: I0216 12:35:58.150566 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:58 crc kubenswrapper[4799]: I0216 12:35:58.175531 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:35:58 crc kubenswrapper[4799]: I0216 12:35:58.175590 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:35:58 crc kubenswrapper[4799]: E0216 12:35:58.176372 4799 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:58 crc kubenswrapper[4799]: I0216 12:35:58.177053 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:35:58 crc kubenswrapper[4799]: I0216 12:35:58.256301 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1c2eeb105e506cbf8ec7b16bdfe22cbb125f7b40ebcb656443565573d4ca9c8c"} Feb 16 12:35:58 crc kubenswrapper[4799]: E0216 12:35:58.607018 4799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="7s" Feb 16 12:35:59 crc kubenswrapper[4799]: I0216 12:35:59.267545 4799 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="91308225b5d1d8b93ccd8ef3200c728505101ecb892ee7608fd49334e8c28eaf" exitCode=0 Feb 16 12:35:59 crc kubenswrapper[4799]: I0216 12:35:59.267643 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"91308225b5d1d8b93ccd8ef3200c728505101ecb892ee7608fd49334e8c28eaf"} Feb 16 12:35:59 crc kubenswrapper[4799]: I0216 12:35:59.268101 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:35:59 crc kubenswrapper[4799]: I0216 12:35:59.268192 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:35:59 crc kubenswrapper[4799]: I0216 12:35:59.268872 4799 status_manager.go:851] "Failed to get status for pod" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:59 crc kubenswrapper[4799]: I0216 12:35:59.269569 4799 status_manager.go:851] "Failed to get status for pod" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" pod="openshift-authentication/oauth-openshift-558db77b4-sl8tw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sl8tw\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 16 12:35:59 crc kubenswrapper[4799]: E0216 12:35:59.269821 4799 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:36:00 crc kubenswrapper[4799]: I0216 12:36:00.278777 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 12:36:00 crc kubenswrapper[4799]: I0216 12:36:00.279150 4799 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46" exitCode=1 Feb 16 12:36:00 crc kubenswrapper[4799]: I0216 12:36:00.279241 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46"} Feb 16 12:36:00 crc kubenswrapper[4799]: I0216 12:36:00.279844 4799 scope.go:117] "RemoveContainer" containerID="c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46" Feb 16 12:36:00 crc kubenswrapper[4799]: I0216 12:36:00.282860 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d1e45f674f4b4fe41ac4acc2e19a5983b282761516383d3ab050aa8cd4d72553"} Feb 16 12:36:00 crc kubenswrapper[4799]: I0216 12:36:00.282932 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7969194cbd2509d0fc8788a034a2405a1b8c5b8a54cf68d063fb841f519ae737"} Feb 16 12:36:01 crc kubenswrapper[4799]: I0216 12:36:01.301854 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 12:36:01 crc kubenswrapper[4799]: I0216 12:36:01.302556 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4fae881199d8f5efd1f59405d4ad3dffeef130b3a2caf9b3a81f641f6feef9ce"} Feb 16 12:36:01 crc kubenswrapper[4799]: I0216 12:36:01.307365 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"757d491fa4ebddf160de3aef793dbc104e57cb11e4db7965a4d18a18e5cf6960"} Feb 16 12:36:01 crc kubenswrapper[4799]: I0216 12:36:01.307428 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"916374664d1077a39b46dbca3e11c85c341631d9f77da21eea4d1190dc16b492"} Feb 16 12:36:01 crc kubenswrapper[4799]: I0216 12:36:01.307444 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"36819bdc619a194c8c7ae48395b5bc6250cf6ce174c467ec5019058f9f29fa72"} Feb 16 12:36:01 crc kubenswrapper[4799]: I0216 12:36:01.307791 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:36:01 crc kubenswrapper[4799]: I0216 12:36:01.307820 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:36:01 crc kubenswrapper[4799]: I0216 12:36:01.308293 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:36:02 crc kubenswrapper[4799]: I0216 12:36:02.931536 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:36:02 crc kubenswrapper[4799]: I0216 12:36:02.931840 4799 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 12:36:02 crc kubenswrapper[4799]: I0216 12:36:02.931912 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 12:36:03 crc kubenswrapper[4799]: I0216 12:36:03.177383 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:36:03 crc kubenswrapper[4799]: I0216 12:36:03.177462 4799 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:36:03 crc kubenswrapper[4799]: I0216 12:36:03.184683 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:36:06 crc kubenswrapper[4799]: I0216 12:36:06.334995 4799 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:36:06 crc kubenswrapper[4799]: I0216 12:36:06.446805 4799 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b6686515-18dd-43dd-b030-9a7955b082ab" Feb 16 12:36:07 crc kubenswrapper[4799]: I0216 12:36:07.358497 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:36:07 crc kubenswrapper[4799]: I0216 12:36:07.358903 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:36:07 crc kubenswrapper[4799]: I0216 12:36:07.365519 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:36:07 crc kubenswrapper[4799]: I0216 12:36:07.366726 4799 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b6686515-18dd-43dd-b030-9a7955b082ab" Feb 16 12:36:08 crc kubenswrapper[4799]: I0216 12:36:08.366936 4799 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:36:08 crc kubenswrapper[4799]: I0216 12:36:08.367004 4799 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:36:08 crc kubenswrapper[4799]: I0216 12:36:08.372252 4799 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b6686515-18dd-43dd-b030-9a7955b082ab" Feb 16 12:36:08 crc kubenswrapper[4799]: I0216 12:36:08.763229 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:36:12 crc kubenswrapper[4799]: I0216 12:36:12.932885 4799 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 12:36:12 crc kubenswrapper[4799]: I0216 12:36:12.933756 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 12:36:16 crc kubenswrapper[4799]: I0216 12:36:16.620743 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 12:36:16 crc kubenswrapper[4799]: I0216 12:36:16.633998 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 12:36:16 crc kubenswrapper[4799]: I0216 12:36:16.851412 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 
16 12:36:16 crc kubenswrapper[4799]: I0216 12:36:16.900631 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 12:36:17 crc kubenswrapper[4799]: I0216 12:36:17.075893 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 12:36:17 crc kubenswrapper[4799]: I0216 12:36:17.202028 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 12:36:17 crc kubenswrapper[4799]: I0216 12:36:17.262232 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 12:36:17 crc kubenswrapper[4799]: I0216 12:36:17.667173 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 12:36:17 crc kubenswrapper[4799]: I0216 12:36:17.764117 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 12:36:17 crc kubenswrapper[4799]: I0216 12:36:17.816231 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 12:36:17 crc kubenswrapper[4799]: I0216 12:36:17.953769 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 12:36:18 crc kubenswrapper[4799]: I0216 12:36:18.041586 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 12:36:18 crc kubenswrapper[4799]: I0216 12:36:18.247771 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 12:36:18 crc kubenswrapper[4799]: I0216 12:36:18.611822 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 
16 12:36:18 crc kubenswrapper[4799]: I0216 12:36:18.710834 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 12:36:18 crc kubenswrapper[4799]: I0216 12:36:18.731539 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 12:36:18 crc kubenswrapper[4799]: I0216 12:36:18.827467 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 12:36:19 crc kubenswrapper[4799]: I0216 12:36:19.409827 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 12:36:19 crc kubenswrapper[4799]: I0216 12:36:19.487760 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 12:36:19 crc kubenswrapper[4799]: I0216 12:36:19.496601 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:36:19 crc kubenswrapper[4799]: I0216 12:36:19.533361 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:36:19 crc kubenswrapper[4799]: I0216 12:36:19.577087 4799 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 12:36:19 crc kubenswrapper[4799]: I0216 12:36:19.650452 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 12:36:19 crc kubenswrapper[4799]: I0216 12:36:19.690656 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 12:36:19 crc 
kubenswrapper[4799]: I0216 12:36:19.755528 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.034411 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.048681 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.063943 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.190967 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.218906 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.303419 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.309705 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.409723 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.410246 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.414113 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.579782 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.585657 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.655511 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.655543 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.751919 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.793014 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.851415 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.862138 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.895896 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 12:36:20 crc kubenswrapper[4799]: I0216 12:36:20.919439 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 12:36:21 crc 
kubenswrapper[4799]: I0216 12:36:21.060835 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.133840 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.179706 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.195428 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.232891 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.256394 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.342918 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.375591 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.377739 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.485649 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.626012 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.657816 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.753358 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.769192 4799 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.796033 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.801648 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.890987 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.973377 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.985879 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 12:36:21 crc kubenswrapper[4799]: I0216 12:36:21.989660 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.022676 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.151712 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.201987 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.249377 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.252642 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.321976 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.381217 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.423091 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.472392 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.505756 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.584703 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.586865 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.676502 
4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.704905 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.827597 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.838659 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.859022 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.890951 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.918619 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.932165 4799 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.932277 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: 
connection refused" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.932387 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.933771 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"4fae881199d8f5efd1f59405d4ad3dffeef130b3a2caf9b3a81f641f6feef9ce"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.934070 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://4fae881199d8f5efd1f59405d4ad3dffeef130b3a2caf9b3a81f641f6feef9ce" gracePeriod=30 Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.948965 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.956624 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 12:36:22 crc kubenswrapper[4799]: I0216 12:36:22.997781 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.072902 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.073382 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.073463 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.113504 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.140108 4799 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.529222 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.535509 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.557302 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.566466 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.585450 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.619080 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.622441 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.748253 4799 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 12:36:23 crc kubenswrapper[4799]: I0216 12:36:23.845655 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.069846 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.225039 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.227024 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.344694 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.376470 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.377099 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.476045 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.492609 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.597525 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 12:36:24 crc 
kubenswrapper[4799]: I0216 12:36:24.661423 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.722594 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.748092 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.913224 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 12:36:24 crc kubenswrapper[4799]: I0216 12:36:24.949397 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.012160 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.112410 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.282764 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.398592 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.416961 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.431901 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.462327 4799 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.498845 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.568015 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.630101 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.638797 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.659755 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.686407 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.700350 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.761784 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.857515 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 12:36:25 crc kubenswrapper[4799]: I0216 12:36:25.979047 4799 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.048899 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.238284 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.243735 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.246738 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.264870 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.271775 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.308099 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.354399 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.395314 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.402636 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 12:36:26 crc 
kubenswrapper[4799]: I0216 12:36:26.466896 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.493094 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.494689 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.544826 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.574095 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.621007 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.680926 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.688500 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.753100 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.778120 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.792296 4799 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.797586 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.843675 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.909690 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.918537 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.949193 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 12:36:26 crc kubenswrapper[4799]: I0216 12:36:26.988159 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.060119 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.186765 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.191375 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.211242 4799 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.339359 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.361937 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.369841 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.394603 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.408340 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.573203 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.590624 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.667120 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.690988 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.716624 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.820623 4799 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.849848 4799 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.855588 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-sl8tw"] Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.855685 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5686c9c7dd-99t72","openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:36:27 crc kubenswrapper[4799]: E0216 12:36:27.855944 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" containerName="oauth-openshift" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.855968 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" containerName="oauth-openshift" Feb 16 12:36:27 crc kubenswrapper[4799]: E0216 12:36:27.855985 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" containerName="installer" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.855993 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" containerName="installer" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.856151 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" containerName="oauth-openshift" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.856170 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="159c1a7c-133c-47d5-990d-c0869b0eafa4" containerName="installer" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.856426 4799 kubelet.go:1909] "Trying 
to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.856476 4799 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56e971d9-2ab6-4f2e-ad1a-979f4213dfea" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.856693 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.860755 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.862839 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.862914 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.865612 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.866705 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.867730 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.867922 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.867754 4799 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.868933 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.871180 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.871265 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.871421 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.874171 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.881922 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.884743 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.899695 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.899666766 podStartE2EDuration="21.899666766s" podCreationTimestamp="2026-02-16 12:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:36:27.893965905 +0000 UTC m=+293.486981269" watchObservedRunningTime="2026-02-16 12:36:27.899666766 +0000 UTC m=+293.492682110" Feb 16 
12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.919535 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 12:36:27 crc kubenswrapper[4799]: I0216 12:36:27.995099 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.003613 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-audit-policies\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.003712 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-session\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.003771 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.003830 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h22v\" (UniqueName: 
\"kubernetes.io/projected/41da13d9-1c4b-4f58-8363-80a6b3b021e6-kube-api-access-2h22v\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.003870 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41da13d9-1c4b-4f58-8363-80a6b3b021e6-audit-dir\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.003920 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.004032 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.004070 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") 
" pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.004151 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.004229 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.004394 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-error\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.004458 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-login\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.004507 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.004539 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.024688 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.037575 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105494 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41da13d9-1c4b-4f58-8363-80a6b3b021e6-audit-dir\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105548 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h22v\" (UniqueName: \"kubernetes.io/projected/41da13d9-1c4b-4f58-8363-80a6b3b021e6-kube-api-access-2h22v\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: 
\"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105579 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105618 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105642 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105671 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105701 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105730 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-error\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105749 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-login\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105772 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105792 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105813 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-audit-policies\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-session\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.105851 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.107062 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 
crc kubenswrapper[4799]: I0216 12:36:28.107062 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.107573 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-audit-policies\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.107645 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41da13d9-1c4b-4f58-8363-80a6b3b021e6-audit-dir\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.108183 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.115019 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-login\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: 
\"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.115003 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.116241 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.117048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.117051 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-session\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.117949 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.118539 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.122833 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.124275 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/41da13d9-1c4b-4f58-8363-80a6b3b021e6-v4-0-config-user-template-error\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.128776 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.130478 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h22v\" (UniqueName: \"kubernetes.io/projected/41da13d9-1c4b-4f58-8363-80a6b3b021e6-kube-api-access-2h22v\") pod \"oauth-openshift-5686c9c7dd-99t72\" (UID: \"41da13d9-1c4b-4f58-8363-80a6b3b021e6\") " pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc 
kubenswrapper[4799]: I0216 12:36:28.142079 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.181644 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.206263 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.247422 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.361754 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.525230 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.554960 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.612693 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.684293 4799 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.820048 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.885368 4799 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.903428 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.920540 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.923071 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.924208 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 12:36:28 crc kubenswrapper[4799]: I0216 12:36:28.987299 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.027231 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.081237 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.099873 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.105593 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.159398 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98bb2e4c-5ed3-4d64-b732-e740b80883f5" path="/var/lib/kubelet/pods/98bb2e4c-5ed3-4d64-b732-e740b80883f5/volumes" Feb 16 
12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.191383 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.202488 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.226011 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.264397 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.269235 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.747949 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 12:36:29 crc kubenswrapper[4799]: I0216 12:36:29.825599 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.013094 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.025360 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.109113 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.123512 4799 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.154100 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.263405 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.294183 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.327378 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.396625 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.407734 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.498215 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.558909 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.619387 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.624519 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.691921 4799 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.855962 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 12:36:30 crc kubenswrapper[4799]: I0216 12:36:30.957365 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.116613 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.166213 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.189828 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.194602 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.339440 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.370482 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.432279 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.631866 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 
12:36:31.648435 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.746706 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 12:36:31 crc kubenswrapper[4799]: I0216 12:36:31.773616 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:36:32 crc kubenswrapper[4799]: I0216 12:36:32.095766 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5686c9c7dd-99t72"] Feb 16 12:36:32 crc kubenswrapper[4799]: I0216 12:36:32.212916 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 12:36:32 crc kubenswrapper[4799]: I0216 12:36:32.407412 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 12:36:32 crc kubenswrapper[4799]: I0216 12:36:32.678612 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5686c9c7dd-99t72"] Feb 16 12:36:33 crc kubenswrapper[4799]: I0216 12:36:33.567260 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" event={"ID":"41da13d9-1c4b-4f58-8363-80a6b3b021e6","Type":"ContainerStarted","Data":"b66604d33aed632c86e801cc97ec50dabf310ce256c76cdb94ad01b7c603c200"} Feb 16 12:36:33 crc kubenswrapper[4799]: I0216 12:36:33.567334 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" event={"ID":"41da13d9-1c4b-4f58-8363-80a6b3b021e6","Type":"ContainerStarted","Data":"8d2d079610ecbdc39d4aa9e6dd7086341141c172c76187ac6099ae9f07894066"} Feb 16 12:36:33 crc kubenswrapper[4799]: I0216 12:36:33.568100 4799 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:33 crc kubenswrapper[4799]: I0216 12:36:33.575540 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" Feb 16 12:36:33 crc kubenswrapper[4799]: I0216 12:36:33.601237 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5686c9c7dd-99t72" podStartSLOduration=67.601212524 podStartE2EDuration="1m7.601212524s" podCreationTimestamp="2026-02-16 12:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:36:33.598536244 +0000 UTC m=+299.191551618" watchObservedRunningTime="2026-02-16 12:36:33.601212524 +0000 UTC m=+299.194227868" Feb 16 12:36:33 crc kubenswrapper[4799]: I0216 12:36:33.907461 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 12:36:34 crc kubenswrapper[4799]: I0216 12:36:34.915809 4799 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 12:36:39 crc kubenswrapper[4799]: I0216 12:36:39.208226 4799 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:36:39 crc kubenswrapper[4799]: I0216 12:36:39.209211 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ad6a0ba9b5c38b4a3f886bdf8a4cfce7564fa5893c1499c7706ddd34412e0e51" gracePeriod=5 Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.685323 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.685835 4799 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ad6a0ba9b5c38b4a3f886bdf8a4cfce7564fa5893c1499c7706ddd34412e0e51" exitCode=137 Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.810726 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.810916 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.885981 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886070 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886113 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886187 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886261 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886322 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886459 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886491 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886552 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886566 4799 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.886739 4799 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.899120 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.988305 4799 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.988927 4799 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 12:36:44 crc kubenswrapper[4799]: I0216 12:36:44.988948 4799 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:36:45 crc kubenswrapper[4799]: I0216 12:36:45.163578 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 12:36:45 crc kubenswrapper[4799]: I0216 12:36:45.700359 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 12:36:45 crc kubenswrapper[4799]: I0216 12:36:45.700477 4799 scope.go:117] "RemoveContainer" containerID="ad6a0ba9b5c38b4a3f886bdf8a4cfce7564fa5893c1499c7706ddd34412e0e51" Feb 16 12:36:45 crc kubenswrapper[4799]: I0216 12:36:45.700629 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:36:53 crc kubenswrapper[4799]: I0216 12:36:53.768013 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 12:36:53 crc kubenswrapper[4799]: I0216 12:36:53.772522 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 12:36:53 crc kubenswrapper[4799]: I0216 12:36:53.772611 4799 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4fae881199d8f5efd1f59405d4ad3dffeef130b3a2caf9b3a81f641f6feef9ce" exitCode=137 Feb 16 12:36:53 crc kubenswrapper[4799]: I0216 12:36:53.772669 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4fae881199d8f5efd1f59405d4ad3dffeef130b3a2caf9b3a81f641f6feef9ce"} Feb 16 12:36:53 crc kubenswrapper[4799]: I0216 12:36:53.772727 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b62e889bfdba20602df8525fe0c33ff0ca89b3f2d22b5c9599ea1d1622df5afb"} Feb 16 12:36:53 crc kubenswrapper[4799]: I0216 12:36:53.772763 4799 scope.go:117] "RemoveContainer" containerID="c6cc6a02dc75976bcaeef7745d460bb2f856d17633820b33d9a05a17ef900f46" Feb 16 12:36:54 crc kubenswrapper[4799]: I0216 12:36:54.782099 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 12:36:58 crc kubenswrapper[4799]: I0216 
12:36:58.762768 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:37:02 crc kubenswrapper[4799]: I0216 12:37:02.932377 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:37:02 crc kubenswrapper[4799]: I0216 12:37:02.937337 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:37:03 crc kubenswrapper[4799]: I0216 12:37:03.884281 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:37:51 crc kubenswrapper[4799]: I0216 12:37:51.793236 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:37:51 crc kubenswrapper[4799]: I0216 12:37:51.794403 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.135206 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xl79d"] Feb 16 12:38:04 crc kubenswrapper[4799]: E0216 12:38:04.137286 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.137316 4799 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.137504 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.138071 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.147822 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xl79d"] Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.295754 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-registry-tls\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.296039 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-trusted-ca\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.296156 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpx9k\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-kube-api-access-vpx9k\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: 
I0216 12:38:04.296199 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-bound-sa-token\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.296232 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.296313 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-registry-certificates\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.296503 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.296615 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.346810 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.398372 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.398453 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-registry-certificates\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.398510 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: 
I0216 12:38:04.398559 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-registry-tls\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.398612 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-trusted-ca\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.398648 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpx9k\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-kube-api-access-vpx9k\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.398671 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-bound-sa-token\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.399078 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.400893 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-registry-certificates\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.401247 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-trusted-ca\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.409972 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-registry-tls\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.409994 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.417752 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-bound-sa-token\") pod \"image-registry-66df7c8f76-xl79d\" (UID: 
\"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.419213 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpx9k\" (UniqueName: \"kubernetes.io/projected/7ff5cc96-a935-445e-bf60-45aaefdc4a2b-kube-api-access-vpx9k\") pod \"image-registry-66df7c8f76-xl79d\" (UID: \"7ff5cc96-a935-445e-bf60-45aaefdc4a2b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.457355 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:04 crc kubenswrapper[4799]: I0216 12:38:04.661270 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xl79d"] Feb 16 12:38:05 crc kubenswrapper[4799]: I0216 12:38:05.294751 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" event={"ID":"7ff5cc96-a935-445e-bf60-45aaefdc4a2b","Type":"ContainerStarted","Data":"67d12dba021040f44ec9f04dd48f7cf81dc5a1de51ae1f272f454e7e3026a829"} Feb 16 12:38:05 crc kubenswrapper[4799]: I0216 12:38:05.295727 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" event={"ID":"7ff5cc96-a935-445e-bf60-45aaefdc4a2b","Type":"ContainerStarted","Data":"260ef78c6a99f5cfabe360f25965d9c3b574e56af7a25f228a3d9035e4722d3c"} Feb 16 12:38:05 crc kubenswrapper[4799]: I0216 12:38:05.297153 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:05 crc kubenswrapper[4799]: I0216 12:38:05.319151 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" podStartSLOduration=1.319115133 
podStartE2EDuration="1.319115133s" podCreationTimestamp="2026-02-16 12:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:38:05.31772551 +0000 UTC m=+390.910740854" watchObservedRunningTime="2026-02-16 12:38:05.319115133 +0000 UTC m=+390.912130467" Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.806187 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fs5dc"] Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.808669 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fs5dc" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="registry-server" containerID="cri-o://62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2" gracePeriod=30 Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.817226 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xm7s"] Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.817593 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9xm7s" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerName="registry-server" containerID="cri-o://9214ed7439fc51c805423078563e24039276b5ac13330c567f3871332ab3dee5" gracePeriod=30 Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.822679 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wrg52"] Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.823031 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" podUID="ffbd79e8-b486-40f6-bc8a-94a92f32a71e" containerName="marketplace-operator" 
containerID="cri-o://6ea1c32423ae94cb1936bb4a541e60f2b4ca6f6b792b5af8b1b01b2a731a08df" gracePeriod=30 Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.829262 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wfjv"] Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.829587 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wfjv" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerName="registry-server" containerID="cri-o://6467821647ebdb6b790e04c6d718aaa433ef1e6354d0e453bf6204b8083f34bd" gracePeriod=30 Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.853059 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgm8v"] Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.853483 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jgm8v" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="registry-server" containerID="cri-o://bdebac4d576fb26fe50d11a16cb7525aa312ee419bd43e0840f4df0981ebd221" gracePeriod=30 Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.884721 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qb8p5"] Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.886499 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.891189 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qb8p5"] Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.900323 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-5wfjv" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerName="registry-server" probeResult="failure" output="" Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.938943 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: \"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.939020 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9n2q\" (UniqueName: \"kubernetes.io/projected/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-kube-api-access-b9n2q\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: \"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:06 crc kubenswrapper[4799]: I0216 12:38:06.939065 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: \"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.040854 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9n2q\" (UniqueName: \"kubernetes.io/projected/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-kube-api-access-b9n2q\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: \"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.041508 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: \"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.041586 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: \"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.046402 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: \"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.050692 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: 
\"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.061447 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9n2q\" (UniqueName: \"kubernetes.io/projected/a8b56ef0-6df7-4a6a-a550-b0699ebaf909-kube-api-access-b9n2q\") pod \"marketplace-operator-79b997595-qb8p5\" (UID: \"a8b56ef0-6df7-4a6a-a550-b0699ebaf909\") " pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.313895 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.316840 4799 generic.go:334] "Generic (PLEG): container finished" podID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerID="bdebac4d576fb26fe50d11a16cb7525aa312ee419bd43e0840f4df0981ebd221" exitCode=0 Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.316932 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgm8v" event={"ID":"a302cd9c-7040-4248-8fc0-55d280e45b9e","Type":"ContainerDied","Data":"bdebac4d576fb26fe50d11a16cb7525aa312ee419bd43e0840f4df0981ebd221"} Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.322706 4799 generic.go:334] "Generic (PLEG): container finished" podID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerID="6467821647ebdb6b790e04c6d718aaa433ef1e6354d0e453bf6204b8083f34bd" exitCode=0 Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.322786 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wfjv" event={"ID":"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d","Type":"ContainerDied","Data":"6467821647ebdb6b790e04c6d718aaa433ef1e6354d0e453bf6204b8083f34bd"} Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.326756 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-fs5dc" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.329622 4799 generic.go:334] "Generic (PLEG): container finished" podID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerID="9214ed7439fc51c805423078563e24039276b5ac13330c567f3871332ab3dee5" exitCode=0 Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.329694 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm7s" event={"ID":"6734f76c-775d-47c3-8c54-e7c3e25a4575","Type":"ContainerDied","Data":"9214ed7439fc51c805423078563e24039276b5ac13330c567f3871332ab3dee5"} Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.329723 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xm7s" event={"ID":"6734f76c-775d-47c3-8c54-e7c3e25a4575","Type":"ContainerDied","Data":"97eb50f0ceb673e0bfa79bc99647a6688866b452c0030b9eeb98de3127f3449e"} Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.329735 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97eb50f0ceb673e0bfa79bc99647a6688866b452c0030b9eeb98de3127f3449e" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.336886 4799 generic.go:334] "Generic (PLEG): container finished" podID="ffbd79e8-b486-40f6-bc8a-94a92f32a71e" containerID="6ea1c32423ae94cb1936bb4a541e60f2b4ca6f6b792b5af8b1b01b2a731a08df" exitCode=0 Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.337095 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" event={"ID":"ffbd79e8-b486-40f6-bc8a-94a92f32a71e","Type":"ContainerDied","Data":"6ea1c32423ae94cb1936bb4a541e60f2b4ca6f6b792b5af8b1b01b2a731a08df"} Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.337210 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" 
event={"ID":"ffbd79e8-b486-40f6-bc8a-94a92f32a71e","Type":"ContainerDied","Data":"af1fd1977c4283863249b9f448c648f71a1ca424f81ba2e926a6583cdbc8e6cc"} Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.337301 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1fd1977c4283863249b9f448c648f71a1ca424f81ba2e926a6583cdbc8e6cc" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.339238 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xm7s" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.339949 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.341532 4799 generic.go:334] "Generic (PLEG): container finished" podID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerID="62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2" exitCode=0 Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.342367 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fs5dc" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.342500 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fs5dc" event={"ID":"3c8b6238-00b9-48d2-b1f5-4375b0555da6","Type":"ContainerDied","Data":"62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2"} Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.342535 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fs5dc" event={"ID":"3c8b6238-00b9-48d2-b1f5-4375b0555da6","Type":"ContainerDied","Data":"741d0600e89e4abd70a1a39e77f9300c434d4f5731aa67077c4fdc3815063000"} Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.342555 4799 scope.go:117] "RemoveContainer" containerID="62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.349256 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.380458 4799 scope.go:117] "RemoveContainer" containerID="8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.405473 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.418061 4799 scope.go:117] "RemoveContainer" containerID="eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.449801 4799 scope.go:117] "RemoveContainer" containerID="62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.450862 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-catalog-content\") pod \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.450900 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cndf8\" (UniqueName: \"kubernetes.io/projected/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-kube-api-access-cndf8\") pod \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.450923 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7t92\" (UniqueName: \"kubernetes.io/projected/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-kube-api-access-g7t92\") pod \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.450960 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btnxs\" (UniqueName: \"kubernetes.io/projected/a302cd9c-7040-4248-8fc0-55d280e45b9e-kube-api-access-btnxs\") pod \"a302cd9c-7040-4248-8fc0-55d280e45b9e\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.450990 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-utilities\") pod \"6734f76c-775d-47c3-8c54-e7c3e25a4575\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458314 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4729\" (UniqueName: \"kubernetes.io/projected/6734f76c-775d-47c3-8c54-e7c3e25a4575-kube-api-access-h4729\") pod \"6734f76c-775d-47c3-8c54-e7c3e25a4575\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458374 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-utilities\") pod \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\" (UID: \"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458427 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-utilities\") pod \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458512 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-trusted-ca\") pod \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458549 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-operator-metrics\") pod 
\"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\" (UID: \"ffbd79e8-b486-40f6-bc8a-94a92f32a71e\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458574 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-utilities\") pod \"a302cd9c-7040-4248-8fc0-55d280e45b9e\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458614 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-catalog-content\") pod \"a302cd9c-7040-4248-8fc0-55d280e45b9e\" (UID: \"a302cd9c-7040-4248-8fc0-55d280e45b9e\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458646 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-catalog-content\") pod \"6734f76c-775d-47c3-8c54-e7c3e25a4575\" (UID: \"6734f76c-775d-47c3-8c54-e7c3e25a4575\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458675 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skx5j\" (UniqueName: \"kubernetes.io/projected/3c8b6238-00b9-48d2-b1f5-4375b0555da6-kube-api-access-skx5j\") pod \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458700 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-catalog-content\") pod \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\" (UID: \"3c8b6238-00b9-48d2-b1f5-4375b0555da6\") " Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.458963 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-kube-api-access-g7t92" (OuterVolumeSpecName: "kube-api-access-g7t92") pod "897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" (UID: "897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d"). InnerVolumeSpecName "kube-api-access-g7t92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.459371 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7t92\" (UniqueName: \"kubernetes.io/projected/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-kube-api-access-g7t92\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.461860 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8b6238-00b9-48d2-b1f5-4375b0555da6-kube-api-access-skx5j" (OuterVolumeSpecName: "kube-api-access-skx5j") pod "3c8b6238-00b9-48d2-b1f5-4375b0555da6" (UID: "3c8b6238-00b9-48d2-b1f5-4375b0555da6"). InnerVolumeSpecName "kube-api-access-skx5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.462698 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ffbd79e8-b486-40f6-bc8a-94a92f32a71e" (UID: "ffbd79e8-b486-40f6-bc8a-94a92f32a71e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.465471 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ffbd79e8-b486-40f6-bc8a-94a92f32a71e" (UID: "ffbd79e8-b486-40f6-bc8a-94a92f32a71e"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.466382 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-utilities" (OuterVolumeSpecName: "utilities") pod "a302cd9c-7040-4248-8fc0-55d280e45b9e" (UID: "a302cd9c-7040-4248-8fc0-55d280e45b9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: E0216 12:38:07.469271 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2\": container with ID starting with 62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2 not found: ID does not exist" containerID="62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.469360 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2"} err="failed to get container status \"62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2\": rpc error: code = NotFound desc = could not find container \"62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2\": container with ID starting with 62964c494a21d5fbe21a3cf32f1079bcf336714308daa2c1c4c71d052912c3a2 not found: ID does not exist" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.469402 4799 scope.go:117] "RemoveContainer" containerID="8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.470633 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-utilities" (OuterVolumeSpecName: "utilities") pod 
"6734f76c-775d-47c3-8c54-e7c3e25a4575" (UID: "6734f76c-775d-47c3-8c54-e7c3e25a4575"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.472500 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-kube-api-access-cndf8" (OuterVolumeSpecName: "kube-api-access-cndf8") pod "ffbd79e8-b486-40f6-bc8a-94a92f32a71e" (UID: "ffbd79e8-b486-40f6-bc8a-94a92f32a71e"). InnerVolumeSpecName "kube-api-access-cndf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.474317 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-utilities" (OuterVolumeSpecName: "utilities") pod "3c8b6238-00b9-48d2-b1f5-4375b0555da6" (UID: "3c8b6238-00b9-48d2-b1f5-4375b0555da6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: E0216 12:38:07.475004 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8\": container with ID starting with 8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8 not found: ID does not exist" containerID="8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.475061 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8"} err="failed to get container status \"8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8\": rpc error: code = NotFound desc = could not find container \"8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8\": container with ID starting with 8dba966e93a9c7e2147f91b038dbf70cd54bf3469f050409697d5a06b12b47e8 not found: ID does not exist" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.475106 4799 scope.go:117] "RemoveContainer" containerID="eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77" Feb 16 12:38:07 crc kubenswrapper[4799]: E0216 12:38:07.475504 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77\": container with ID starting with eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77 not found: ID does not exist" containerID="eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.475535 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77"} 
err="failed to get container status \"eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77\": rpc error: code = NotFound desc = could not find container \"eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77\": container with ID starting with eac340c3302466897361056ff63ab02a7ccea29a88f93ab8d8c20f8d7adcea77 not found: ID does not exist" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.478320 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-utilities" (OuterVolumeSpecName: "utilities") pod "897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" (UID: "897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.479175 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6734f76c-775d-47c3-8c54-e7c3e25a4575-kube-api-access-h4729" (OuterVolumeSpecName: "kube-api-access-h4729") pod "6734f76c-775d-47c3-8c54-e7c3e25a4575" (UID: "6734f76c-775d-47c3-8c54-e7c3e25a4575"). InnerVolumeSpecName "kube-api-access-h4729". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.499104 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" (UID: "897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.513271 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a302cd9c-7040-4248-8fc0-55d280e45b9e-kube-api-access-btnxs" (OuterVolumeSpecName: "kube-api-access-btnxs") pod "a302cd9c-7040-4248-8fc0-55d280e45b9e" (UID: "a302cd9c-7040-4248-8fc0-55d280e45b9e"). InnerVolumeSpecName "kube-api-access-btnxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.556805 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6734f76c-775d-47c3-8c54-e7c3e25a4575" (UID: "6734f76c-775d-47c3-8c54-e7c3e25a4575"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561052 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cndf8\" (UniqueName: \"kubernetes.io/projected/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-kube-api-access-cndf8\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561116 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btnxs\" (UniqueName: \"kubernetes.io/projected/a302cd9c-7040-4248-8fc0-55d280e45b9e-kube-api-access-btnxs\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561153 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561169 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4729\" (UniqueName: 
\"kubernetes.io/projected/6734f76c-775d-47c3-8c54-e7c3e25a4575-kube-api-access-h4729\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561203 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561215 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561225 4799 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561234 4799 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffbd79e8-b486-40f6-bc8a-94a92f32a71e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561245 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561271 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6734f76c-775d-47c3-8c54-e7c3e25a4575-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561282 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skx5j\" (UniqueName: \"kubernetes.io/projected/3c8b6238-00b9-48d2-b1f5-4375b0555da6-kube-api-access-skx5j\") on node 
\"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.561291 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.580496 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c8b6238-00b9-48d2-b1f5-4375b0555da6" (UID: "3c8b6238-00b9-48d2-b1f5-4375b0555da6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.611520 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qb8p5"] Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.648724 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a302cd9c-7040-4248-8fc0-55d280e45b9e" (UID: "a302cd9c-7040-4248-8fc0-55d280e45b9e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.663023 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a302cd9c-7040-4248-8fc0-55d280e45b9e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.663080 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8b6238-00b9-48d2-b1f5-4375b0555da6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.696903 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fs5dc"] Feb 16 12:38:07 crc kubenswrapper[4799]: I0216 12:38:07.700735 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fs5dc"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.350360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgm8v" event={"ID":"a302cd9c-7040-4248-8fc0-55d280e45b9e","Type":"ContainerDied","Data":"9a1fea5518c41a939ea61102a13d633354cf10c2bb29fee60a196969b930fd2f"} Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.350405 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgm8v" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.350894 4799 scope.go:117] "RemoveContainer" containerID="bdebac4d576fb26fe50d11a16cb7525aa312ee419bd43e0840f4df0981ebd221" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.353759 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wfjv" event={"ID":"897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d","Type":"ContainerDied","Data":"e5e9552ed59b81498288a081b92a9f1d6297ce7c836a37cdb5dface96d4b7791"} Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.353829 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wfjv" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.355622 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" event={"ID":"a8b56ef0-6df7-4a6a-a550-b0699ebaf909","Type":"ContainerStarted","Data":"680b33b45c7792b8edb4c0cc612a7e47473cecd7aafdcee3a1b806fdc51917c0"} Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.355664 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" event={"ID":"a8b56ef0-6df7-4a6a-a550-b0699ebaf909","Type":"ContainerStarted","Data":"cb6f79d121cd21338efe8a257bac75a07e40e279202443bb6cc5284a221104d6"} Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.356280 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.357045 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wrg52" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.357096 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9xm7s" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.364294 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.373515 4799 scope.go:117] "RemoveContainer" containerID="250cb01f10fc3de81937d81b18153c11d970a5768d2282e9142076c8064c3438" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.386730 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qb8p5" podStartSLOduration=2.386687235 podStartE2EDuration="2.386687235s" podCreationTimestamp="2026-02-16 12:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:38:08.381215657 +0000 UTC m=+393.974231001" watchObservedRunningTime="2026-02-16 12:38:08.386687235 +0000 UTC m=+393.979702569" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.406050 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wrg52"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.411453 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wrg52"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.418761 4799 scope.go:117] "RemoveContainer" containerID="0dec1c5253cdd1366578b3de392f0591363f882ca3140969eaf142c95d0286a0" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.466133 4799 scope.go:117] "RemoveContainer" containerID="6467821647ebdb6b790e04c6d718aaa433ef1e6354d0e453bf6204b8083f34bd" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.485381 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wfjv"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 
12:38:08.500682 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wfjv"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.506271 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xm7s"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.508179 4799 scope.go:117] "RemoveContainer" containerID="28bf16b64351da15b7c9f7f73f8ef4a5b57c00ee224a9b8efe1e6cf110f560f6" Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.511014 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9xm7s"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.514330 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgm8v"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.519479 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jgm8v"] Feb 16 12:38:08 crc kubenswrapper[4799]: I0216 12:38:08.524558 4799 scope.go:117] "RemoveContainer" containerID="c0b552150b5f198afb5d2bc1132d179dd7571b4044a8b4c3fd15530321dcbba2" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.160377 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" path="/var/lib/kubelet/pods/3c8b6238-00b9-48d2-b1f5-4375b0555da6/volumes" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.161754 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" path="/var/lib/kubelet/pods/6734f76c-775d-47c3-8c54-e7c3e25a4575/volumes" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.163181 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" path="/var/lib/kubelet/pods/897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d/volumes" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.165823 4799 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" path="/var/lib/kubelet/pods/a302cd9c-7040-4248-8fc0-55d280e45b9e/volumes" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.167212 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffbd79e8-b486-40f6-bc8a-94a92f32a71e" path="/var/lib/kubelet/pods/ffbd79e8-b486-40f6-bc8a-94a92f32a71e/volumes" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.830625 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bl8v2"] Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.831404 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerName="extract-content" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.831478 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerName="extract-content" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.831644 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.831662 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.831737 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="extract-utilities" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.832212 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="extract-utilities" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.832238 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" 
containerName="extract-utilities" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.832298 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerName="extract-utilities" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.832318 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="extract-utilities" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.832332 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="extract-utilities" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.832400 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.832422 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.832495 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerName="extract-content" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.832511 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerName="extract-content" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.832531 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.832601 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.832734 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" 
containerName="extract-utilities" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.832836 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerName="extract-utilities" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.832933 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.832948 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.833012 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffbd79e8-b486-40f6-bc8a-94a92f32a71e" containerName="marketplace-operator" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.833027 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffbd79e8-b486-40f6-bc8a-94a92f32a71e" containerName="marketplace-operator" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.833053 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="extract-content" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.833167 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="extract-content" Feb 16 12:38:09 crc kubenswrapper[4799]: E0216 12:38:09.833194 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="extract-content" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.833206 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="extract-content" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.833571 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffbd79e8-b486-40f6-bc8a-94a92f32a71e" 
containerName="marketplace-operator" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.833620 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6734f76c-775d-47c3-8c54-e7c3e25a4575" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.833643 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8b6238-00b9-48d2-b1f5-4375b0555da6" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.833662 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a302cd9c-7040-4248-8fc0-55d280e45b9e" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.833682 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="897ba2bf-ebcc-4c1d-bad1-78ecbb07c57d" containerName="registry-server" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.840245 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.841120 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl8v2"] Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.843052 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.914087 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06fa4a8e-8c8a-4317-a695-7430ccad4dea-utilities\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.914541 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/06fa4a8e-8c8a-4317-a695-7430ccad4dea-catalog-content\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:09 crc kubenswrapper[4799]: I0216 12:38:09.914682 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsglx\" (UniqueName: \"kubernetes.io/projected/06fa4a8e-8c8a-4317-a695-7430ccad4dea-kube-api-access-gsglx\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.016702 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06fa4a8e-8c8a-4317-a695-7430ccad4dea-utilities\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.017102 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06fa4a8e-8c8a-4317-a695-7430ccad4dea-catalog-content\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.017200 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsglx\" (UniqueName: \"kubernetes.io/projected/06fa4a8e-8c8a-4317-a695-7430ccad4dea-kube-api-access-gsglx\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.017570 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/06fa4a8e-8c8a-4317-a695-7430ccad4dea-utilities\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.018213 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06fa4a8e-8c8a-4317-a695-7430ccad4dea-catalog-content\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.046876 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsglx\" (UniqueName: \"kubernetes.io/projected/06fa4a8e-8c8a-4317-a695-7430ccad4dea-kube-api-access-gsglx\") pod \"redhat-operators-bl8v2\" (UID: \"06fa4a8e-8c8a-4317-a695-7430ccad4dea\") " pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.173272 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.424288 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lxfqt"] Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.426710 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.429151 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.432969 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lxfqt"] Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.527720 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqfxd\" (UniqueName: \"kubernetes.io/projected/d89b19db-d98a-4004-b73c-8bb54ddf632d-kube-api-access-rqfxd\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.527909 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-utilities\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.527947 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-catalog-content\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.618321 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl8v2"] Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.638953 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rqfxd\" (UniqueName: \"kubernetes.io/projected/d89b19db-d98a-4004-b73c-8bb54ddf632d-kube-api-access-rqfxd\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.639296 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-utilities\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.639404 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-catalog-content\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.640093 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-utilities\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.641245 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-catalog-content\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.663330 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqfxd\" (UniqueName: 
\"kubernetes.io/projected/d89b19db-d98a-4004-b73c-8bb54ddf632d-kube-api-access-rqfxd\") pod \"certified-operators-lxfqt\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:10 crc kubenswrapper[4799]: I0216 12:38:10.794462 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:11 crc kubenswrapper[4799]: I0216 12:38:11.020212 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lxfqt"] Feb 16 12:38:11 crc kubenswrapper[4799]: W0216 12:38:11.030342 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd89b19db_d98a_4004_b73c_8bb54ddf632d.slice/crio-ffd7103119cb4a2e9eaee70d6dff5010c3adfed6ea77df3f561740a55f41cf21 WatchSource:0}: Error finding container ffd7103119cb4a2e9eaee70d6dff5010c3adfed6ea77df3f561740a55f41cf21: Status 404 returned error can't find the container with id ffd7103119cb4a2e9eaee70d6dff5010c3adfed6ea77df3f561740a55f41cf21 Feb 16 12:38:11 crc kubenswrapper[4799]: I0216 12:38:11.383277 4799 generic.go:334] "Generic (PLEG): container finished" podID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerID="4bda4f5356e5c224e7793a3446e1c92163f4e69c5c711dd530e643741f53d58d" exitCode=0 Feb 16 12:38:11 crc kubenswrapper[4799]: I0216 12:38:11.383394 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxfqt" event={"ID":"d89b19db-d98a-4004-b73c-8bb54ddf632d","Type":"ContainerDied","Data":"4bda4f5356e5c224e7793a3446e1c92163f4e69c5c711dd530e643741f53d58d"} Feb 16 12:38:11 crc kubenswrapper[4799]: I0216 12:38:11.383885 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxfqt" 
event={"ID":"d89b19db-d98a-4004-b73c-8bb54ddf632d","Type":"ContainerStarted","Data":"ffd7103119cb4a2e9eaee70d6dff5010c3adfed6ea77df3f561740a55f41cf21"} Feb 16 12:38:11 crc kubenswrapper[4799]: I0216 12:38:11.385434 4799 generic.go:334] "Generic (PLEG): container finished" podID="06fa4a8e-8c8a-4317-a695-7430ccad4dea" containerID="29647cbaf02f8a664e4bc55b40fea60a7528109c73f9c9c2f2bdaf515ca158d1" exitCode=0 Feb 16 12:38:11 crc kubenswrapper[4799]: I0216 12:38:11.385496 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl8v2" event={"ID":"06fa4a8e-8c8a-4317-a695-7430ccad4dea","Type":"ContainerDied","Data":"29647cbaf02f8a664e4bc55b40fea60a7528109c73f9c9c2f2bdaf515ca158d1"} Feb 16 12:38:11 crc kubenswrapper[4799]: I0216 12:38:11.385544 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl8v2" event={"ID":"06fa4a8e-8c8a-4317-a695-7430ccad4dea","Type":"ContainerStarted","Data":"23c66d0524e420813dc5d568d910100ff045fd778a3b32dbb4577762161458c0"} Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.223574 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhx9b"] Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.227341 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.230005 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.234948 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhx9b"] Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.373255 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-catalog-content\") pod \"community-operators-mhx9b\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.373336 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-utilities\") pod \"community-operators-mhx9b\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.373749 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkr2v\" (UniqueName: \"kubernetes.io/projected/fc89a2f7-851b-473a-818c-db718947d490-kube-api-access-wkr2v\") pod \"community-operators-mhx9b\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.475354 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkr2v\" (UniqueName: \"kubernetes.io/projected/fc89a2f7-851b-473a-818c-db718947d490-kube-api-access-wkr2v\") pod \"community-operators-mhx9b\" 
(UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.475428 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-catalog-content\") pod \"community-operators-mhx9b\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.475460 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-utilities\") pod \"community-operators-mhx9b\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.475965 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-utilities\") pod \"community-operators-mhx9b\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.476146 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-catalog-content\") pod \"community-operators-mhx9b\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.502098 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkr2v\" (UniqueName: \"kubernetes.io/projected/fc89a2f7-851b-473a-818c-db718947d490-kube-api-access-wkr2v\") pod \"community-operators-mhx9b\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " 
pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.551176 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.826813 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9t876"] Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.833002 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.835766 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.838404 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t876"] Feb 16 12:38:12 crc kubenswrapper[4799]: W0216 12:38:12.872526 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc89a2f7_851b_473a_818c_db718947d490.slice/crio-d438c7c93c31b3cd75623f3d254d559f41dcc190534e4257c2cb2671eb142453 WatchSource:0}: Error finding container d438c7c93c31b3cd75623f3d254d559f41dcc190534e4257c2cb2671eb142453: Status 404 returned error can't find the container with id d438c7c93c31b3cd75623f3d254d559f41dcc190534e4257c2cb2671eb142453 Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.874829 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhx9b"] Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.883289 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347ac568-46b1-4360-90fb-22d726ea9ab5-utilities\") pod \"redhat-marketplace-9t876\" (UID: 
\"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.883348 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8th\" (UniqueName: \"kubernetes.io/projected/347ac568-46b1-4360-90fb-22d726ea9ab5-kube-api-access-6h8th\") pod \"redhat-marketplace-9t876\" (UID: \"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.883509 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347ac568-46b1-4360-90fb-22d726ea9ab5-catalog-content\") pod \"redhat-marketplace-9t876\" (UID: \"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.985071 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8th\" (UniqueName: \"kubernetes.io/projected/347ac568-46b1-4360-90fb-22d726ea9ab5-kube-api-access-6h8th\") pod \"redhat-marketplace-9t876\" (UID: \"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.985203 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347ac568-46b1-4360-90fb-22d726ea9ab5-catalog-content\") pod \"redhat-marketplace-9t876\" (UID: \"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.985299 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347ac568-46b1-4360-90fb-22d726ea9ab5-utilities\") pod 
\"redhat-marketplace-9t876\" (UID: \"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.985782 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347ac568-46b1-4360-90fb-22d726ea9ab5-catalog-content\") pod \"redhat-marketplace-9t876\" (UID: \"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:12 crc kubenswrapper[4799]: I0216 12:38:12.985843 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347ac568-46b1-4360-90fb-22d726ea9ab5-utilities\") pod \"redhat-marketplace-9t876\" (UID: \"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.006658 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8th\" (UniqueName: \"kubernetes.io/projected/347ac568-46b1-4360-90fb-22d726ea9ab5-kube-api-access-6h8th\") pod \"redhat-marketplace-9t876\" (UID: \"347ac568-46b1-4360-90fb-22d726ea9ab5\") " pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.159179 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.365705 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t876"] Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.409088 4799 generic.go:334] "Generic (PLEG): container finished" podID="fc89a2f7-851b-473a-818c-db718947d490" containerID="0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7" exitCode=0 Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.409203 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhx9b" event={"ID":"fc89a2f7-851b-473a-818c-db718947d490","Type":"ContainerDied","Data":"0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7"} Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.409264 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhx9b" event={"ID":"fc89a2f7-851b-473a-818c-db718947d490","Type":"ContainerStarted","Data":"d438c7c93c31b3cd75623f3d254d559f41dcc190534e4257c2cb2671eb142453"} Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.411190 4799 generic.go:334] "Generic (PLEG): container finished" podID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerID="2444c82e4c6252f03f69b26b2fb14fb39dfecd147153b07aabe046ad48801351" exitCode=0 Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.411216 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxfqt" event={"ID":"d89b19db-d98a-4004-b73c-8bb54ddf632d","Type":"ContainerDied","Data":"2444c82e4c6252f03f69b26b2fb14fb39dfecd147153b07aabe046ad48801351"} Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.414984 4799 generic.go:334] "Generic (PLEG): container finished" podID="06fa4a8e-8c8a-4317-a695-7430ccad4dea" containerID="7c2bbd8838278d39ad5daf65cc8a3d1173328fa4943b3f61d4380daf43bfff94" exitCode=0 Feb 16 12:38:13 
crc kubenswrapper[4799]: I0216 12:38:13.415078 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl8v2" event={"ID":"06fa4a8e-8c8a-4317-a695-7430ccad4dea","Type":"ContainerDied","Data":"7c2bbd8838278d39ad5daf65cc8a3d1173328fa4943b3f61d4380daf43bfff94"} Feb 16 12:38:13 crc kubenswrapper[4799]: I0216 12:38:13.418171 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t876" event={"ID":"347ac568-46b1-4360-90fb-22d726ea9ab5","Type":"ContainerStarted","Data":"efaac52c9aa75950ecc8080796e76f1224cd0a1d78fec9841e2459895af17656"} Feb 16 12:38:14 crc kubenswrapper[4799]: I0216 12:38:14.431294 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl8v2" event={"ID":"06fa4a8e-8c8a-4317-a695-7430ccad4dea","Type":"ContainerStarted","Data":"dcc5e7f7bf69ae5f7823fda1ae825fafe0dd4e4c95c27bf3c6fb2c94684ea208"} Feb 16 12:38:14 crc kubenswrapper[4799]: I0216 12:38:14.436636 4799 generic.go:334] "Generic (PLEG): container finished" podID="347ac568-46b1-4360-90fb-22d726ea9ab5" containerID="f41cdd592dfa738232236e4eb97f8d0dd187a2c5ec8586d2dbd27ec5f0c314ec" exitCode=0 Feb 16 12:38:14 crc kubenswrapper[4799]: I0216 12:38:14.436712 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t876" event={"ID":"347ac568-46b1-4360-90fb-22d726ea9ab5","Type":"ContainerDied","Data":"f41cdd592dfa738232236e4eb97f8d0dd187a2c5ec8586d2dbd27ec5f0c314ec"} Feb 16 12:38:15 crc kubenswrapper[4799]: I0216 12:38:15.447393 4799 generic.go:334] "Generic (PLEG): container finished" podID="fc89a2f7-851b-473a-818c-db718947d490" containerID="8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc" exitCode=0 Feb 16 12:38:15 crc kubenswrapper[4799]: I0216 12:38:15.447478 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhx9b" 
event={"ID":"fc89a2f7-851b-473a-818c-db718947d490","Type":"ContainerDied","Data":"8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc"} Feb 16 12:38:15 crc kubenswrapper[4799]: I0216 12:38:15.451013 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxfqt" event={"ID":"d89b19db-d98a-4004-b73c-8bb54ddf632d","Type":"ContainerStarted","Data":"31a1a63daeefecc31f77e6170ebb5b05a0576d67fb9f77b40f3d7ac483c76292"} Feb 16 12:38:15 crc kubenswrapper[4799]: I0216 12:38:15.490163 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bl8v2" podStartSLOduration=3.721946758 podStartE2EDuration="6.490140495s" podCreationTimestamp="2026-02-16 12:38:09 +0000 UTC" firstStartedPulling="2026-02-16 12:38:11.387356156 +0000 UTC m=+396.980371500" lastFinishedPulling="2026-02-16 12:38:14.155549903 +0000 UTC m=+399.748565237" observedRunningTime="2026-02-16 12:38:15.48818684 +0000 UTC m=+401.081202174" watchObservedRunningTime="2026-02-16 12:38:15.490140495 +0000 UTC m=+401.083155829" Feb 16 12:38:15 crc kubenswrapper[4799]: I0216 12:38:15.517494 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lxfqt" podStartSLOduration=2.877474387 podStartE2EDuration="5.517466013s" podCreationTimestamp="2026-02-16 12:38:10 +0000 UTC" firstStartedPulling="2026-02-16 12:38:11.385103163 +0000 UTC m=+396.978118497" lastFinishedPulling="2026-02-16 12:38:14.025094799 +0000 UTC m=+399.618110123" observedRunningTime="2026-02-16 12:38:15.517096354 +0000 UTC m=+401.110111688" watchObservedRunningTime="2026-02-16 12:38:15.517466013 +0000 UTC m=+401.110481347" Feb 16 12:38:16 crc kubenswrapper[4799]: I0216 12:38:16.460604 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhx9b" 
event={"ID":"fc89a2f7-851b-473a-818c-db718947d490","Type":"ContainerStarted","Data":"37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00"} Feb 16 12:38:16 crc kubenswrapper[4799]: I0216 12:38:16.463234 4799 generic.go:334] "Generic (PLEG): container finished" podID="347ac568-46b1-4360-90fb-22d726ea9ab5" containerID="702535c5fd42eafea284eb8740c3049283f9b3d60ffe51c1fe67b99a92286751" exitCode=0 Feb 16 12:38:16 crc kubenswrapper[4799]: I0216 12:38:16.463323 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t876" event={"ID":"347ac568-46b1-4360-90fb-22d726ea9ab5","Type":"ContainerDied","Data":"702535c5fd42eafea284eb8740c3049283f9b3d60ffe51c1fe67b99a92286751"} Feb 16 12:38:16 crc kubenswrapper[4799]: I0216 12:38:16.490509 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhx9b" podStartSLOduration=1.8229075209999999 podStartE2EDuration="4.490476209s" podCreationTimestamp="2026-02-16 12:38:12 +0000 UTC" firstStartedPulling="2026-02-16 12:38:13.41061985 +0000 UTC m=+399.003635204" lastFinishedPulling="2026-02-16 12:38:16.078188558 +0000 UTC m=+401.671203892" observedRunningTime="2026-02-16 12:38:16.479368349 +0000 UTC m=+402.072383693" watchObservedRunningTime="2026-02-16 12:38:16.490476209 +0000 UTC m=+402.083491553" Feb 16 12:38:17 crc kubenswrapper[4799]: I0216 12:38:17.471817 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t876" event={"ID":"347ac568-46b1-4360-90fb-22d726ea9ab5","Type":"ContainerStarted","Data":"b09a83376471e398766e5047c2887bbc70ee35dba0765c76e5144c88cf1ed975"} Feb 16 12:38:17 crc kubenswrapper[4799]: I0216 12:38:17.495861 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9t876" podStartSLOduration=3.046627607 podStartE2EDuration="5.495839399s" podCreationTimestamp="2026-02-16 12:38:12 +0000 UTC" 
firstStartedPulling="2026-02-16 12:38:14.651187149 +0000 UTC m=+400.244202483" lastFinishedPulling="2026-02-16 12:38:17.100398941 +0000 UTC m=+402.693414275" observedRunningTime="2026-02-16 12:38:17.493811132 +0000 UTC m=+403.086826546" watchObservedRunningTime="2026-02-16 12:38:17.495839399 +0000 UTC m=+403.088854733" Feb 16 12:38:20 crc kubenswrapper[4799]: I0216 12:38:20.173590 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:20 crc kubenswrapper[4799]: I0216 12:38:20.174433 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:20 crc kubenswrapper[4799]: I0216 12:38:20.795056 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:20 crc kubenswrapper[4799]: I0216 12:38:20.795717 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:20 crc kubenswrapper[4799]: I0216 12:38:20.850004 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:21 crc kubenswrapper[4799]: I0216 12:38:21.236012 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bl8v2" podUID="06fa4a8e-8c8a-4317-a695-7430ccad4dea" containerName="registry-server" probeResult="failure" output=< Feb 16 12:38:21 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:38:21 crc kubenswrapper[4799]: > Feb 16 12:38:21 crc kubenswrapper[4799]: I0216 12:38:21.559254 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:38:21 crc kubenswrapper[4799]: I0216 12:38:21.793391 4799 patch_prober.go:28] interesting 
pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:38:21 crc kubenswrapper[4799]: I0216 12:38:21.793477 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:38:22 crc kubenswrapper[4799]: I0216 12:38:22.560821 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:22 crc kubenswrapper[4799]: I0216 12:38:22.561414 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:22 crc kubenswrapper[4799]: I0216 12:38:22.642285 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:23 crc kubenswrapper[4799]: I0216 12:38:23.159544 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:23 crc kubenswrapper[4799]: I0216 12:38:23.159589 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:23 crc kubenswrapper[4799]: I0216 12:38:23.208688 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:23 crc kubenswrapper[4799]: I0216 12:38:23.573081 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mhx9b" Feb 16 12:38:23 crc 
kubenswrapper[4799]: I0216 12:38:23.575978 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9t876" Feb 16 12:38:24 crc kubenswrapper[4799]: I0216 12:38:24.481407 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xl79d" Feb 16 12:38:24 crc kubenswrapper[4799]: I0216 12:38:24.576923 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-df4xr"] Feb 16 12:38:30 crc kubenswrapper[4799]: I0216 12:38:30.216695 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:30 crc kubenswrapper[4799]: I0216 12:38:30.277525 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bl8v2" Feb 16 12:38:49 crc kubenswrapper[4799]: I0216 12:38:49.626072 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" podUID="67094e0b-8edb-4b4f-aed3-a704b0854384" containerName="registry" containerID="cri-o://b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf" gracePeriod=30 Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.003798 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.125003 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67094e0b-8edb-4b4f-aed3-a704b0854384-ca-trust-extracted\") pod \"67094e0b-8edb-4b4f-aed3-a704b0854384\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.125088 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-bound-sa-token\") pod \"67094e0b-8edb-4b4f-aed3-a704b0854384\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.125136 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-certificates\") pod \"67094e0b-8edb-4b4f-aed3-a704b0854384\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.125425 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"67094e0b-8edb-4b4f-aed3-a704b0854384\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.125496 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67094e0b-8edb-4b4f-aed3-a704b0854384-installation-pull-secrets\") pod \"67094e0b-8edb-4b4f-aed3-a704b0854384\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.125580 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9fqz\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-kube-api-access-p9fqz\") pod \"67094e0b-8edb-4b4f-aed3-a704b0854384\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.125603 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-tls\") pod \"67094e0b-8edb-4b4f-aed3-a704b0854384\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.125633 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-trusted-ca\") pod \"67094e0b-8edb-4b4f-aed3-a704b0854384\" (UID: \"67094e0b-8edb-4b4f-aed3-a704b0854384\") " Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.126872 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "67094e0b-8edb-4b4f-aed3-a704b0854384" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.128534 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "67094e0b-8edb-4b4f-aed3-a704b0854384" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.134406 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-kube-api-access-p9fqz" (OuterVolumeSpecName: "kube-api-access-p9fqz") pod "67094e0b-8edb-4b4f-aed3-a704b0854384" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384"). InnerVolumeSpecName "kube-api-access-p9fqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.134827 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67094e0b-8edb-4b4f-aed3-a704b0854384-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "67094e0b-8edb-4b4f-aed3-a704b0854384" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.135510 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "67094e0b-8edb-4b4f-aed3-a704b0854384" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.143296 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "67094e0b-8edb-4b4f-aed3-a704b0854384" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.145357 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "67094e0b-8edb-4b4f-aed3-a704b0854384" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.156663 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67094e0b-8edb-4b4f-aed3-a704b0854384-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "67094e0b-8edb-4b4f-aed3-a704b0854384" (UID: "67094e0b-8edb-4b4f-aed3-a704b0854384"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.227604 4799 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67094e0b-8edb-4b4f-aed3-a704b0854384-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.227661 4799 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.227682 4799 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.227706 4799 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/67094e0b-8edb-4b4f-aed3-a704b0854384-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.227724 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9fqz\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-kube-api-access-p9fqz\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.227746 4799 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67094e0b-8edb-4b4f-aed3-a704b0854384-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.227762 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67094e0b-8edb-4b4f-aed3-a704b0854384-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.718437 4799 generic.go:334] "Generic (PLEG): container finished" podID="67094e0b-8edb-4b4f-aed3-a704b0854384" containerID="b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf" exitCode=0 Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.719247 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" event={"ID":"67094e0b-8edb-4b4f-aed3-a704b0854384","Type":"ContainerDied","Data":"b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf"} Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.719321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" event={"ID":"67094e0b-8edb-4b4f-aed3-a704b0854384","Type":"ContainerDied","Data":"e13328f6fa153aa7162850c2532a7466a6784c89214dd3062825510645856c68"} Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.719366 4799 scope.go:117] "RemoveContainer" 
containerID="b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.719994 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-df4xr" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.752916 4799 scope.go:117] "RemoveContainer" containerID="b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf" Feb 16 12:38:50 crc kubenswrapper[4799]: E0216 12:38:50.753971 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf\": container with ID starting with b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf not found: ID does not exist" containerID="b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.754391 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf"} err="failed to get container status \"b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf\": rpc error: code = NotFound desc = could not find container \"b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf\": container with ID starting with b8ac8b0a37adfebaf53c4c9c593a7e4eacaf8f7f1d92114f762b1360b84cc6cf not found: ID does not exist" Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.777377 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-df4xr"] Feb 16 12:38:50 crc kubenswrapper[4799]: I0216 12:38:50.781301 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-df4xr"] Feb 16 12:38:51 crc kubenswrapper[4799]: I0216 12:38:51.161897 4799 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="67094e0b-8edb-4b4f-aed3-a704b0854384" path="/var/lib/kubelet/pods/67094e0b-8edb-4b4f-aed3-a704b0854384/volumes" Feb 16 12:38:51 crc kubenswrapper[4799]: I0216 12:38:51.793886 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:38:51 crc kubenswrapper[4799]: I0216 12:38:51.794006 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:38:51 crc kubenswrapper[4799]: I0216 12:38:51.794192 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:38:51 crc kubenswrapper[4799]: I0216 12:38:51.795261 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99ae92538ccb5394a598414e9620dd6f3da82af389aa189751d9526a42ca1516"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:38:51 crc kubenswrapper[4799]: I0216 12:38:51.795379 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://99ae92538ccb5394a598414e9620dd6f3da82af389aa189751d9526a42ca1516" gracePeriod=600 Feb 16 12:38:52 crc kubenswrapper[4799]: I0216 12:38:52.739479 4799 
generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="99ae92538ccb5394a598414e9620dd6f3da82af389aa189751d9526a42ca1516" exitCode=0 Feb 16 12:38:52 crc kubenswrapper[4799]: I0216 12:38:52.739577 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"99ae92538ccb5394a598414e9620dd6f3da82af389aa189751d9526a42ca1516"} Feb 16 12:38:52 crc kubenswrapper[4799]: I0216 12:38:52.740111 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"ba06c19342df98d380d31640088ece96cb12ba32a0f9050891bda640b4c7c600"} Feb 16 12:38:52 crc kubenswrapper[4799]: I0216 12:38:52.740169 4799 scope.go:117] "RemoveContainer" containerID="09af10fc4cb126350de739d51aed9cda694ae6a05bf6a757731e4f9a9841d8cf" Feb 16 12:40:35 crc kubenswrapper[4799]: I0216 12:40:35.421920 4799 scope.go:117] "RemoveContainer" containerID="6ea1c32423ae94cb1936bb4a541e60f2b4ca6f6b792b5af8b1b01b2a731a08df" Feb 16 12:40:35 crc kubenswrapper[4799]: I0216 12:40:35.451932 4799 scope.go:117] "RemoveContainer" containerID="fda4872590e9956393bc29d7b49a0aaa50db46d4aa6b7ba663e882b3770dd433" Feb 16 12:41:21 crc kubenswrapper[4799]: I0216 12:41:21.793252 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:41:21 crc kubenswrapper[4799]: I0216 12:41:21.794326 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:41:35 crc kubenswrapper[4799]: I0216 12:41:35.501211 4799 scope.go:117] "RemoveContainer" containerID="9214ed7439fc51c805423078563e24039276b5ac13330c567f3871332ab3dee5" Feb 16 12:41:35 crc kubenswrapper[4799]: I0216 12:41:35.539793 4799 scope.go:117] "RemoveContainer" containerID="e36753c15a934e39445060934bdf3ccabe515ea921f08a19086ebf353adae8a0" Feb 16 12:41:51 crc kubenswrapper[4799]: I0216 12:41:51.793416 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:41:51 crc kubenswrapper[4799]: I0216 12:41:51.794589 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:42:21 crc kubenswrapper[4799]: I0216 12:42:21.793483 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:42:21 crc kubenswrapper[4799]: I0216 12:42:21.794749 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 16 12:42:21 crc kubenswrapper[4799]: I0216 12:42:21.794842 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:42:21 crc kubenswrapper[4799]: I0216 12:42:21.795763 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba06c19342df98d380d31640088ece96cb12ba32a0f9050891bda640b4c7c600"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:42:21 crc kubenswrapper[4799]: I0216 12:42:21.795853 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://ba06c19342df98d380d31640088ece96cb12ba32a0f9050891bda640b4c7c600" gracePeriod=600 Feb 16 12:42:22 crc kubenswrapper[4799]: I0216 12:42:22.747680 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="ba06c19342df98d380d31640088ece96cb12ba32a0f9050891bda640b4c7c600" exitCode=0 Feb 16 12:42:22 crc kubenswrapper[4799]: I0216 12:42:22.747827 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"ba06c19342df98d380d31640088ece96cb12ba32a0f9050891bda640b4c7c600"} Feb 16 12:42:22 crc kubenswrapper[4799]: I0216 12:42:22.748691 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"86245d72136a5128ea7329ec812aaf474d9f9a0b7cefc3d679dd266cf69dce8f"} Feb 16 12:42:22 
crc kubenswrapper[4799]: I0216 12:42:22.748769 4799 scope.go:117] "RemoveContainer" containerID="99ae92538ccb5394a598414e9620dd6f3da82af389aa189751d9526a42ca1516" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.076563 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb"] Feb 16 12:43:18 crc kubenswrapper[4799]: E0216 12:43:18.077546 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67094e0b-8edb-4b4f-aed3-a704b0854384" containerName="registry" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.077567 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="67094e0b-8edb-4b4f-aed3-a704b0854384" containerName="registry" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.077740 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="67094e0b-8edb-4b4f-aed3-a704b0854384" containerName="registry" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.078321 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.085823 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-hcks5"] Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.086944 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hcks5" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.097745 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.098011 4799 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9kvcq" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.098379 4799 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-4qhdk" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.098472 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.100742 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb"] Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.109617 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hcks5"] Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.113720 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-p9txt"] Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.114567 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.122888 4799 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-k69mv" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.131029 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-p9txt"] Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.172996 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl9d\" (UniqueName: \"kubernetes.io/projected/d2d7275d-595b-44d8-afc7-8df5bb4b8e18-kube-api-access-lbl9d\") pod \"cert-manager-cainjector-cf98fcc89-kwbcb\" (UID: \"d2d7275d-595b-44d8-afc7-8df5bb4b8e18\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.173117 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkdhf\" (UniqueName: \"kubernetes.io/projected/75520423-f121-446d-8ad2-d0bfc440fd76-kube-api-access-bkdhf\") pod \"cert-manager-webhook-687f57d79b-p9txt\" (UID: \"75520423-f121-446d-8ad2-d0bfc440fd76\") " pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.173238 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ltr\" (UniqueName: \"kubernetes.io/projected/4ce49784-a833-4d3a-8101-9618730dd5c7-kube-api-access-56ltr\") pod \"cert-manager-858654f9db-hcks5\" (UID: \"4ce49784-a833-4d3a-8101-9618730dd5c7\") " pod="cert-manager/cert-manager-858654f9db-hcks5" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.274034 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbl9d\" (UniqueName: 
\"kubernetes.io/projected/d2d7275d-595b-44d8-afc7-8df5bb4b8e18-kube-api-access-lbl9d\") pod \"cert-manager-cainjector-cf98fcc89-kwbcb\" (UID: \"d2d7275d-595b-44d8-afc7-8df5bb4b8e18\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.274073 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkdhf\" (UniqueName: \"kubernetes.io/projected/75520423-f121-446d-8ad2-d0bfc440fd76-kube-api-access-bkdhf\") pod \"cert-manager-webhook-687f57d79b-p9txt\" (UID: \"75520423-f121-446d-8ad2-d0bfc440fd76\") " pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.274155 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56ltr\" (UniqueName: \"kubernetes.io/projected/4ce49784-a833-4d3a-8101-9618730dd5c7-kube-api-access-56ltr\") pod \"cert-manager-858654f9db-hcks5\" (UID: \"4ce49784-a833-4d3a-8101-9618730dd5c7\") " pod="cert-manager/cert-manager-858654f9db-hcks5" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.296999 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbl9d\" (UniqueName: \"kubernetes.io/projected/d2d7275d-595b-44d8-afc7-8df5bb4b8e18-kube-api-access-lbl9d\") pod \"cert-manager-cainjector-cf98fcc89-kwbcb\" (UID: \"d2d7275d-595b-44d8-afc7-8df5bb4b8e18\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.298956 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkdhf\" (UniqueName: \"kubernetes.io/projected/75520423-f121-446d-8ad2-d0bfc440fd76-kube-api-access-bkdhf\") pod \"cert-manager-webhook-687f57d79b-p9txt\" (UID: \"75520423-f121-446d-8ad2-d0bfc440fd76\") " pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.301901 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56ltr\" (UniqueName: \"kubernetes.io/projected/4ce49784-a833-4d3a-8101-9618730dd5c7-kube-api-access-56ltr\") pod \"cert-manager-858654f9db-hcks5\" (UID: \"4ce49784-a833-4d3a-8101-9618730dd5c7\") " pod="cert-manager/cert-manager-858654f9db-hcks5" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.398770 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.407106 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hcks5" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.429629 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.678935 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb"] Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.689363 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 12:43:18 crc kubenswrapper[4799]: W0216 12:43:18.951836 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75520423_f121_446d_8ad2_d0bfc440fd76.slice/crio-2ca063cc84efa8c3ac523b85f7b8f2393badd03ab0ee12f253c2dd83947c01b3 WatchSource:0}: Error finding container 2ca063cc84efa8c3ac523b85f7b8f2393badd03ab0ee12f253c2dd83947c01b3: Status 404 returned error can't find the container with id 2ca063cc84efa8c3ac523b85f7b8f2393badd03ab0ee12f253c2dd83947c01b3 Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.952527 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-p9txt"] Feb 16 12:43:18 crc 
kubenswrapper[4799]: W0216 12:43:18.954876 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ce49784_a833_4d3a_8101_9618730dd5c7.slice/crio-6cbea7adaa900835f9720e243f5475cea377cc3b189fbd5e47b59e35e6c54060 WatchSource:0}: Error finding container 6cbea7adaa900835f9720e243f5475cea377cc3b189fbd5e47b59e35e6c54060: Status 404 returned error can't find the container with id 6cbea7adaa900835f9720e243f5475cea377cc3b189fbd5e47b59e35e6c54060 Feb 16 12:43:18 crc kubenswrapper[4799]: I0216 12:43:18.955430 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hcks5"] Feb 16 12:43:19 crc kubenswrapper[4799]: I0216 12:43:19.482762 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hcks5" event={"ID":"4ce49784-a833-4d3a-8101-9618730dd5c7","Type":"ContainerStarted","Data":"6cbea7adaa900835f9720e243f5475cea377cc3b189fbd5e47b59e35e6c54060"} Feb 16 12:43:19 crc kubenswrapper[4799]: I0216 12:43:19.483833 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" event={"ID":"75520423-f121-446d-8ad2-d0bfc440fd76","Type":"ContainerStarted","Data":"2ca063cc84efa8c3ac523b85f7b8f2393badd03ab0ee12f253c2dd83947c01b3"} Feb 16 12:43:19 crc kubenswrapper[4799]: I0216 12:43:19.484897 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb" event={"ID":"d2d7275d-595b-44d8-afc7-8df5bb4b8e18","Type":"ContainerStarted","Data":"348954eecd15e3e676d5ce9b80a90f74caa3ee6408c428d15c8dfb67ce470e2e"} Feb 16 12:43:21 crc kubenswrapper[4799]: I0216 12:43:21.498835 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb" event={"ID":"d2d7275d-595b-44d8-afc7-8df5bb4b8e18","Type":"ContainerStarted","Data":"7282998ea0dbdbabc7e85b9d3aa5111a3b8d81ab6c0e7bd188c07d31a7c0257a"} Feb 16 
12:43:21 crc kubenswrapper[4799]: I0216 12:43:21.521652 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kwbcb" podStartSLOduration=1.179998264 podStartE2EDuration="3.521629057s" podCreationTimestamp="2026-02-16 12:43:18 +0000 UTC" firstStartedPulling="2026-02-16 12:43:18.689061249 +0000 UTC m=+704.282076583" lastFinishedPulling="2026-02-16 12:43:21.030692042 +0000 UTC m=+706.623707376" observedRunningTime="2026-02-16 12:43:21.516882027 +0000 UTC m=+707.109897371" watchObservedRunningTime="2026-02-16 12:43:21.521629057 +0000 UTC m=+707.114644391" Feb 16 12:43:23 crc kubenswrapper[4799]: I0216 12:43:23.514593 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" event={"ID":"75520423-f121-446d-8ad2-d0bfc440fd76","Type":"ContainerStarted","Data":"57b0ffcc21735014fa124d1e60f131f3d560017230ac440f607c457bcd2dc264"} Feb 16 12:43:23 crc kubenswrapper[4799]: I0216 12:43:23.515401 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" Feb 16 12:43:23 crc kubenswrapper[4799]: I0216 12:43:23.517264 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hcks5" event={"ID":"4ce49784-a833-4d3a-8101-9618730dd5c7","Type":"ContainerStarted","Data":"b010ef4a8b8cde25a2d51929144fe6098f751a074288ebdf98ec10c70a360ddd"} Feb 16 12:43:23 crc kubenswrapper[4799]: I0216 12:43:23.551973 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" podStartSLOduration=1.687018653 podStartE2EDuration="5.551936593s" podCreationTimestamp="2026-02-16 12:43:18 +0000 UTC" firstStartedPulling="2026-02-16 12:43:18.955297144 +0000 UTC m=+704.548312478" lastFinishedPulling="2026-02-16 12:43:22.820215074 +0000 UTC m=+708.413230418" observedRunningTime="2026-02-16 12:43:23.547591483 +0000 UTC 
m=+709.140606857" watchObservedRunningTime="2026-02-16 12:43:23.551936593 +0000 UTC m=+709.144951957" Feb 16 12:43:23 crc kubenswrapper[4799]: I0216 12:43:23.584390 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-hcks5" podStartSLOduration=1.712942697 podStartE2EDuration="5.584355666s" podCreationTimestamp="2026-02-16 12:43:18 +0000 UTC" firstStartedPulling="2026-02-16 12:43:18.957011331 +0000 UTC m=+704.550026665" lastFinishedPulling="2026-02-16 12:43:22.82842429 +0000 UTC m=+708.421439634" observedRunningTime="2026-02-16 12:43:23.584037487 +0000 UTC m=+709.177052861" watchObservedRunningTime="2026-02-16 12:43:23.584355666 +0000 UTC m=+709.177371030" Feb 16 12:43:28 crc kubenswrapper[4799]: I0216 12:43:28.434342 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-p9txt" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.269382 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzcq6"] Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.270967 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovn-controller" containerID="cri-o://c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6" gracePeriod=30 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.271047 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782" gracePeriod=30 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.271165 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="northd" containerID="cri-o://7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e" gracePeriod=30 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.271239 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovn-acl-logging" containerID="cri-o://51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba" gracePeriod=30 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.271240 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kube-rbac-proxy-node" containerID="cri-o://751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7" gracePeriod=30 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.271056 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="nbdb" containerID="cri-o://a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f" gracePeriod=30 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.271314 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="sbdb" containerID="cri-o://6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd" gracePeriod=30 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.333779 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" 
containerID="cri-o://b18518e791edc1176d193a389ef0578150e5064b7dbc957b4b036bceffdd11c2" gracePeriod=30 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.620211 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/2.log" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.621047 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/1.log" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.621159 4799 generic.go:334] "Generic (PLEG): container finished" podID="ff442c08-09db-4354-b9be-b43956019ba7" containerID="159c40eee1999c836def11b49d0de21c643e5b9140ecb4fc62683775c8af77a9" exitCode=2 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.621265 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7j77r" event={"ID":"ff442c08-09db-4354-b9be-b43956019ba7","Type":"ContainerDied","Data":"159c40eee1999c836def11b49d0de21c643e5b9140ecb4fc62683775c8af77a9"} Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.621332 4799 scope.go:117] "RemoveContainer" containerID="c955bcb20ad6aa1eb1511fb22a974c9a2614341aabae1a0041d80767d65e8d98" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.622178 4799 scope.go:117] "RemoveContainer" containerID="159c40eee1999c836def11b49d0de21c643e5b9140ecb4fc62683775c8af77a9" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.622575 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7j77r_openshift-multus(ff442c08-09db-4354-b9be-b43956019ba7)\"" pod="openshift-multus/multus-7j77r" podUID="ff442c08-09db-4354-b9be-b43956019ba7" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.629258 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/3.log" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.632367 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovn-acl-logging/0.log" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633039 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovn-controller/0.log" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633590 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="b18518e791edc1176d193a389ef0578150e5064b7dbc957b4b036bceffdd11c2" exitCode=0 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633631 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd" exitCode=0 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633639 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f" exitCode=0 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633646 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e" exitCode=0 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633654 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782" exitCode=0 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633662 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7" exitCode=0 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633672 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba" exitCode=143 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633666 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"b18518e791edc1176d193a389ef0578150e5064b7dbc957b4b036bceffdd11c2"} Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633716 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd"} Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633734 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f"} Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633749 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e"} Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633763 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782"} Feb 16 12:43:36 crc 
kubenswrapper[4799]: I0216 12:43:36.633777 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7"} Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633789 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba"} Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633805 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6"} Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.633681 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerID="c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6" exitCode=143 Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.656160 4799 scope.go:117] "RemoveContainer" containerID="0bda43d860c40661eeab85d57412a0caade21f6670c8d8a642e35424d6156c10" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.656824 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovnkube-controller/3.log" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.661433 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovn-acl-logging/0.log" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.662288 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovn-controller/0.log" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.663627 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.668743 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-systemd\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.668785 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.668811 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-netns\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.668889 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-openvswitch\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.668953 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8ae13b0a-1f69-476d-a552-4467fcedac14-ovn-node-metrics-cert\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.668973 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-node-log\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.668995 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-bin\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669036 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-ovn\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669056 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-env-overrides\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669071 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-etc-openvswitch\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669092 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-kubelet\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669110 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-netd\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669158 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-ovn-kubernetes\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669178 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-log-socket\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669203 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcvk2\" (UniqueName: \"kubernetes.io/projected/8ae13b0a-1f69-476d-a552-4467fcedac14-kube-api-access-mcvk2\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669225 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-script-lib\") pod 
\"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669261 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-config\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669289 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-systemd-units\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669338 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-var-lib-openvswitch\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669372 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-slash\") pod \"8ae13b0a-1f69-476d-a552-4467fcedac14\" (UID: \"8ae13b0a-1f69-476d-a552-4467fcedac14\") " Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669946 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669988 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.669955 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670026 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670026 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670070 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670061 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670139 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670146 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-slash" (OuterVolumeSpecName: "host-slash") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670178 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670223 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-log-socket" (OuterVolumeSpecName: "log-socket") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670258 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670252 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-node-log" (OuterVolumeSpecName: "node-log") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670320 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670649 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.670668 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.671001 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.676951 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae13b0a-1f69-476d-a552-4467fcedac14-kube-api-access-mcvk2" (OuterVolumeSpecName: "kube-api-access-mcvk2") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "kube-api-access-mcvk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.677010 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ae13b0a-1f69-476d-a552-4467fcedac14-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.687245 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8ae13b0a-1f69-476d-a552-4467fcedac14" (UID: "8ae13b0a-1f69-476d-a552-4467fcedac14"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726451 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pgspl"] Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726702 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726752 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726772 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726783 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726794 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kube-rbac-proxy-node" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726802 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kube-rbac-proxy-node" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726809 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="northd" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726816 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="northd" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726824 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726830 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726839 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="sbdb" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726846 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="sbdb" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726860 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kubecfg-setup" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726867 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kubecfg-setup" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726878 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovn-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726887 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovn-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726899 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="nbdb" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726905 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="nbdb" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726912 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" 
containerName="ovn-acl-logging" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726918 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovn-acl-logging" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.726927 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.726933 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727028 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovn-acl-logging" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727039 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727047 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="northd" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727056 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kube-rbac-proxy-node" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727064 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727070 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovn-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727078 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727088 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727096 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="sbdb" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727104 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="nbdb" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.727221 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727229 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: E0216 12:43:36.727238 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727246 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727331 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.727345 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" containerName="ovnkube-controller" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.729270 4799 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.770523 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-cni-bin\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.770604 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-log-socket\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.770653 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-ovnkube-config\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.770687 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-systemd-units\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.770718 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-env-overrides\") pod \"ovnkube-node-pgspl\" (UID: 
\"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.770802 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.770994 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-slash\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771083 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-systemd\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771117 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-var-lib-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771212 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-ovnkube-script-lib\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771292 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-cni-netd\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771348 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-run-netns\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771374 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-node-log\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771478 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-etc-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771525 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6fee452-88b5-4e67-a712-03f61a051f5f-ovn-node-metrics-cert\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771551 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771612 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771667 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bhkr\" (UniqueName: \"kubernetes.io/projected/e6fee452-88b5-4e67-a712-03f61a051f5f-kube-api-access-8bhkr\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771713 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-kubelet\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.771786 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-ovn\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772059 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ae13b0a-1f69-476d-a552-4467fcedac14-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772091 4799 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-node-log\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772112 4799 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772173 4799 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772192 4799 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772211 4799 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772233 4799 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772252 4799 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772271 4799 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772292 4799 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-log-socket\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772310 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcvk2\" (UniqueName: \"kubernetes.io/projected/8ae13b0a-1f69-476d-a552-4467fcedac14-kube-api-access-mcvk2\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772329 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772347 4799 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ae13b0a-1f69-476d-a552-4467fcedac14-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772365 4799 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772384 4799 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772402 4799 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-slash\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772420 4799 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772438 4799 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772460 4799 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.772479 4799 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ae13b0a-1f69-476d-a552-4467fcedac14-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874501 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-ovn\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874584 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-cni-bin\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874633 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-log-socket\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874703 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-ovnkube-config\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874718 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-ovn\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874822 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-log-socket\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874801 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-cni-bin\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874825 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-systemd-units\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874738 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-systemd-units\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.874994 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-env-overrides\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875041 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875095 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-slash\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875216 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-systemd\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875267 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-var-lib-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875308 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-slash\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875339 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-ovnkube-script-lib\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875380 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-systemd\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875394 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-cni-netd\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875458 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-run-netns\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875504 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-cni-netd\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875557 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-node-log\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875450 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875543 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-var-lib-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875515 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-node-log\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875675 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-run-netns\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875753 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-etc-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875819 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6fee452-88b5-4e67-a712-03f61a051f5f-ovn-node-metrics-cert\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875847 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-etc-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.875940 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.876010 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bhkr\" (UniqueName: \"kubernetes.io/projected/e6fee452-88b5-4e67-a712-03f61a051f5f-kube-api-access-8bhkr\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.876043 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-run-openvswitch\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.876086 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-kubelet\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.876096 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.876257 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6fee452-88b5-4e67-a712-03f61a051f5f-host-kubelet\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.876331 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-env-overrides\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.876558 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-ovnkube-config\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.877444 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6fee452-88b5-4e67-a712-03f61a051f5f-ovnkube-script-lib\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.881186 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6fee452-88b5-4e67-a712-03f61a051f5f-ovn-node-metrics-cert\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:36 crc kubenswrapper[4799]: I0216 12:43:36.906497 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bhkr\" (UniqueName: \"kubernetes.io/projected/e6fee452-88b5-4e67-a712-03f61a051f5f-kube-api-access-8bhkr\") pod \"ovnkube-node-pgspl\" (UID: \"e6fee452-88b5-4e67-a712-03f61a051f5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.059281 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:37 crc kubenswrapper[4799]: W0216 12:43:37.094645 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6fee452_88b5_4e67_a712_03f61a051f5f.slice/crio-0588548d4b4c2d4c8d0a9ed3ffb598f080b379f2857b4977984cbcb8d333e96b WatchSource:0}: Error finding container 0588548d4b4c2d4c8d0a9ed3ffb598f080b379f2857b4977984cbcb8d333e96b: Status 404 returned error can't find the container with id 0588548d4b4c2d4c8d0a9ed3ffb598f080b379f2857b4977984cbcb8d333e96b
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.648136 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovn-acl-logging/0.log"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.648920 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzcq6_8ae13b0a-1f69-476d-a552-4467fcedac14/ovn-controller/0.log"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.649789 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6" event={"ID":"8ae13b0a-1f69-476d-a552-4467fcedac14","Type":"ContainerDied","Data":"ccdacabc2c0f599d71b956add2a5204bd979482321617ff9f5fd5d70407efb56"}
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.649834 4799 scope.go:117] "RemoveContainer" containerID="b18518e791edc1176d193a389ef0578150e5064b7dbc957b4b036bceffdd11c2"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.649904 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzcq6"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.653460 4799 generic.go:334] "Generic (PLEG): container finished" podID="e6fee452-88b5-4e67-a712-03f61a051f5f" containerID="0e44f4c5efd40bb8cbfa8bdc396e5cd110cf0fc7324ee7c6317f6f3114582e47" exitCode=0
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.653576 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerDied","Data":"0e44f4c5efd40bb8cbfa8bdc396e5cd110cf0fc7324ee7c6317f6f3114582e47"}
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.653635 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"0588548d4b4c2d4c8d0a9ed3ffb598f080b379f2857b4977984cbcb8d333e96b"}
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.658527 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/2.log"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.687899 4799 scope.go:117] "RemoveContainer" containerID="6855c3c61a43777cb382e875f30feb018dbc584a520ca114317fc5456056e8fd"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.730406 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzcq6"]
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.737795 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzcq6"]
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.741538 4799 scope.go:117] "RemoveContainer" containerID="a72535bd07fdceee49af063e5eaf59b09286783adb25724365a1851ebe84357f"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.778156 4799 scope.go:117] "RemoveContainer" containerID="7437bb39d9107546f33c510ecf09ab92f6d2849ddc9dd4d4e303f7da4b7d2a0e"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.801563 4799 scope.go:117] "RemoveContainer" containerID="e01ea177e8f2ecc2da76a1ea90a07e1b8f6e5a7e6431ca82b49c79428fdad782"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.819016 4799 scope.go:117] "RemoveContainer" containerID="751c8fbe846639cb05f1607cb24c66c1cedbab001c6668aa3b055c6b309856e7"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.838602 4799 scope.go:117] "RemoveContainer" containerID="51a126b1eec7a4935149fd0c18a0111d07f2cbe8e3efe3819fc3634039cd21ba"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.865863 4799 scope.go:117] "RemoveContainer" containerID="c9d3abd2b73dd02e437a40d8b089a20235019e1127aaadc15426d26ec3dc45c6"
Feb 16 12:43:37 crc kubenswrapper[4799]: I0216 12:43:37.900417 4799 scope.go:117] "RemoveContainer" containerID="ca407f9ac35fff926f03d199e658ba7a1f3e4f37b802ea9190a34bc17b762adc"
Feb 16 12:43:38 crc kubenswrapper[4799]: I0216 12:43:38.673920 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"cf02164d7a144613f8d538b1d2c124653ab22da93f6230dc58c45541cec2f73d"}
Feb 16 12:43:38 crc kubenswrapper[4799]: I0216 12:43:38.674561 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"2b2edd7e10a3293e1ea84aa8ad3157e17f3bb4414194c6feaf1ca85411ac1501"}
Feb 16 12:43:38 crc kubenswrapper[4799]: I0216 12:43:38.674578 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"56618556a6c5a4e5f245bed8fadec9b125cf8baa3e55033595c984551b195d82"}
Feb 16 12:43:38 crc kubenswrapper[4799]: I0216 12:43:38.674596 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"4f95ebb0717ff15a4132c50d91f55ff617fbb6576a6e00162eda5afb5e1235be"}
Feb 16 12:43:38 crc kubenswrapper[4799]: I0216 12:43:38.674612 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"8781fcefea74c32a807d1c5d1446a412401c6baa6707ea6ea3fc583deab0211a"}
Feb 16 12:43:38 crc kubenswrapper[4799]: I0216 12:43:38.674626 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"e7fa6569375b181481813456e05f552245ccd04c958509491a8d3fced01af72f"}
Feb 16 12:43:39 crc kubenswrapper[4799]: I0216 12:43:39.162051 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ae13b0a-1f69-476d-a552-4467fcedac14" path="/var/lib/kubelet/pods/8ae13b0a-1f69-476d-a552-4467fcedac14/volumes"
Feb 16 12:43:41 crc kubenswrapper[4799]: I0216 12:43:41.703035 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"43611cd7e85522d59d5139cc93b65982bc5360548ec69de2fdfb8cf88442bb1d"}
Feb 16 12:43:43 crc kubenswrapper[4799]: I0216 12:43:43.719198 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" event={"ID":"e6fee452-88b5-4e67-a712-03f61a051f5f","Type":"ContainerStarted","Data":"b92b27f734dbc7e931e245c6ae0d5fcd72f97f712c1a3cd4a8b55090de79539e"}
Feb 16 12:43:43 crc kubenswrapper[4799]: I0216 12:43:43.719658 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:43 crc kubenswrapper[4799]: I0216 12:43:43.719701 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:43 crc kubenswrapper[4799]: I0216 12:43:43.750549 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:43 crc kubenswrapper[4799]: I0216 12:43:43.798177 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" podStartSLOduration=7.798151982 podStartE2EDuration="7.798151982s" podCreationTimestamp="2026-02-16 12:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:43:43.761950205 +0000 UTC m=+729.354965539" watchObservedRunningTime="2026-02-16 12:43:43.798151982 +0000 UTC m=+729.391167306"
Feb 16 12:43:44 crc kubenswrapper[4799]: I0216 12:43:44.739074 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:44 crc kubenswrapper[4799]: I0216 12:43:44.832953 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl"
Feb 16 12:43:50 crc kubenswrapper[4799]: I0216 12:43:50.150604 4799 scope.go:117] "RemoveContainer" containerID="159c40eee1999c836def11b49d0de21c643e5b9140ecb4fc62683775c8af77a9"
Feb 16 12:43:50 crc kubenswrapper[4799]: I0216 12:43:50.788795 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7j77r_ff442c08-09db-4354-b9be-b43956019ba7/kube-multus/2.log"
Feb 16 12:43:50 crc kubenswrapper[4799]: I0216 12:43:50.789260 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7j77r" event={"ID":"ff442c08-09db-4354-b9be-b43956019ba7","Type":"ContainerStarted","Data":"be07f850da7acc29a59b4ddf790128e0ba298910546ee5dcf4670f0b00ae55be"}
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.449776 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"]
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.451682 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.455934 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.467087 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"]
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.553724 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dcp\" (UniqueName: \"kubernetes.io/projected/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-kube-api-access-q4dcp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.553815 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.553859 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.655303 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.655448 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.655580 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4dcp\" (UniqueName: \"kubernetes.io/projected/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-kube-api-access-q4dcp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.656497 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.656539 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.685445 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4dcp\" (UniqueName: \"kubernetes.io/projected/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-kube-api-access-q4dcp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:58 crc kubenswrapper[4799]: I0216 12:43:58.772773 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:43:59 crc kubenswrapper[4799]: I0216 12:43:59.043974 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"]
Feb 16 12:43:59 crc kubenswrapper[4799]: I0216 12:43:59.867076 4799 generic.go:334] "Generic (PLEG): container finished" podID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerID="4d691b1786159985f8d5c84b1b5dec15756a2e0dc13179e1ea1245d4ed8a2ec8" exitCode=0
Feb 16 12:43:59 crc kubenswrapper[4799]: I0216 12:43:59.867199 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s" event={"ID":"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa","Type":"ContainerDied","Data":"4d691b1786159985f8d5c84b1b5dec15756a2e0dc13179e1ea1245d4ed8a2ec8"}
Feb 16 12:43:59 crc kubenswrapper[4799]: I0216 12:43:59.867934 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s" event={"ID":"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa","Type":"ContainerStarted","Data":"16117cc18d2bb0a42d806258ab6acac6a1491495acfbfaf0fa0bee25217b8b40"}
Feb 16 12:44:01 crc kubenswrapper[4799]: I0216 12:44:01.886914 4799 generic.go:334] "Generic (PLEG): container finished" podID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerID="04df9f0a96d96872fda44bd6108d3d0a8994c132bc54a25980fbc555a9908343" exitCode=0
Feb 16 12:44:01 crc kubenswrapper[4799]: I0216 12:44:01.887079 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s" event={"ID":"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa","Type":"ContainerDied","Data":"04df9f0a96d96872fda44bd6108d3d0a8994c132bc54a25980fbc555a9908343"}
Feb 16 12:44:02 crc kubenswrapper[4799]: I0216 12:44:02.897933 4799 generic.go:334] "Generic (PLEG): container finished" podID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerID="52390b163b2330368905c471348e8a05a72368619167dc4ac3341ba0258202b5" exitCode=0
Feb 16 12:44:02 crc kubenswrapper[4799]: I0216 12:44:02.898061 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s" event={"ID":"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa","Type":"ContainerDied","Data":"52390b163b2330368905c471348e8a05a72368619167dc4ac3341ba0258202b5"}
Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.270291 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s"
Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.341292 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-util\") pod \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") "
Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.341456 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-bundle\") pod \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") "
Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.341513 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4dcp\" (UniqueName: \"kubernetes.io/projected/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-kube-api-access-q4dcp\") pod \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\" (UID: \"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa\") "
Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.348473 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-bundle" (OuterVolumeSpecName: "bundle") pod "b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" (UID: "b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.352512 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-kube-api-access-q4dcp" (OuterVolumeSpecName: "kube-api-access-q4dcp") pod "b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" (UID: "b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa"). InnerVolumeSpecName "kube-api-access-q4dcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.362112 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-util" (OuterVolumeSpecName: "util") pod "b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" (UID: "b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.442924 4799 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.442977 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4dcp\" (UniqueName: \"kubernetes.io/projected/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-kube-api-access-q4dcp\") on node \"crc\" DevicePath \"\"" Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.443001 4799 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa-util\") on node \"crc\" DevicePath \"\"" Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.918544 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s" event={"ID":"b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa","Type":"ContainerDied","Data":"16117cc18d2bb0a42d806258ab6acac6a1491495acfbfaf0fa0bee25217b8b40"} Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.918633 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16117cc18d2bb0a42d806258ab6acac6a1491495acfbfaf0fa0bee25217b8b40" Feb 16 12:44:04 crc kubenswrapper[4799]: I0216 12:44:04.918662 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s" Feb 16 12:44:07 crc kubenswrapper[4799]: I0216 12:44:07.104744 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pgspl" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.479292 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr"] Feb 16 12:44:16 crc kubenswrapper[4799]: E0216 12:44:16.480548 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerName="util" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.480569 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerName="util" Feb 16 12:44:16 crc kubenswrapper[4799]: E0216 12:44:16.480585 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerName="extract" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.480593 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerName="extract" Feb 16 12:44:16 crc kubenswrapper[4799]: E0216 12:44:16.480615 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerName="pull" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.480624 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerName="pull" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.480757 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa" containerName="extract" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.481274 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.484924 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.486664 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.486783 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-62kmz" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.502242 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr"] Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.546524 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lksjh\" (UniqueName: \"kubernetes.io/projected/ac6a624e-f6f1-44b4-b236-99307dfc75b3-kube-api-access-lksjh\") pod \"obo-prometheus-operator-68bc856cb9-l48qr\" (UID: \"ac6a624e-f6f1-44b4-b236-99307dfc75b3\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.601514 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8"] Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.602354 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.605149 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.605235 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-6g8w2" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.620366 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8"] Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.623913 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr"] Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.624703 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.647987 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25240a98-4447-4af0-89d7-8868fed65af8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8\" (UID: \"25240a98-4447-4af0-89d7-8868fed65af8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.648083 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lksjh\" (UniqueName: \"kubernetes.io/projected/ac6a624e-f6f1-44b4-b236-99307dfc75b3-kube-api-access-lksjh\") pod \"obo-prometheus-operator-68bc856cb9-l48qr\" (UID: \"ac6a624e-f6f1-44b4-b236-99307dfc75b3\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.648161 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25240a98-4447-4af0-89d7-8868fed65af8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8\" (UID: \"25240a98-4447-4af0-89d7-8868fed65af8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.656040 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr"] Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.695517 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lksjh\" (UniqueName: \"kubernetes.io/projected/ac6a624e-f6f1-44b4-b236-99307dfc75b3-kube-api-access-lksjh\") pod 
\"obo-prometheus-operator-68bc856cb9-l48qr\" (UID: \"ac6a624e-f6f1-44b4-b236-99307dfc75b3\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.750022 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25240a98-4447-4af0-89d7-8868fed65af8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8\" (UID: \"25240a98-4447-4af0-89d7-8868fed65af8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.750123 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/956b64fb-674a-40a6-be9b-b249d5b03aab-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr\" (UID: \"956b64fb-674a-40a6-be9b-b249d5b03aab\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.750174 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25240a98-4447-4af0-89d7-8868fed65af8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8\" (UID: \"25240a98-4447-4af0-89d7-8868fed65af8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.750194 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/956b64fb-674a-40a6-be9b-b249d5b03aab-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr\" (UID: \"956b64fb-674a-40a6-be9b-b249d5b03aab\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.756250 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25240a98-4447-4af0-89d7-8868fed65af8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8\" (UID: \"25240a98-4447-4af0-89d7-8868fed65af8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.757984 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25240a98-4447-4af0-89d7-8868fed65af8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8\" (UID: \"25240a98-4447-4af0-89d7-8868fed65af8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.799457 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9kr64"] Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.800647 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9kr64" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.800879 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.803814 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zsntr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.805247 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.830657 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9kr64"] Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.851703 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f31c8ae-d209-4bed-8ed7-f568f713bd15-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9kr64\" (UID: \"1f31c8ae-d209-4bed-8ed7-f568f713bd15\") " pod="openshift-operators/observability-operator-59bdc8b94-9kr64" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.851792 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/956b64fb-674a-40a6-be9b-b249d5b03aab-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr\" (UID: \"956b64fb-674a-40a6-be9b-b249d5b03aab\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.851844 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/956b64fb-674a-40a6-be9b-b249d5b03aab-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr\" (UID: \"956b64fb-674a-40a6-be9b-b249d5b03aab\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.851888 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvc8n\" (UniqueName: \"kubernetes.io/projected/1f31c8ae-d209-4bed-8ed7-f568f713bd15-kube-api-access-xvc8n\") pod \"observability-operator-59bdc8b94-9kr64\" (UID: \"1f31c8ae-d209-4bed-8ed7-f568f713bd15\") " pod="openshift-operators/observability-operator-59bdc8b94-9kr64" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.856486 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/956b64fb-674a-40a6-be9b-b249d5b03aab-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr\" (UID: \"956b64fb-674a-40a6-be9b-b249d5b03aab\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.856520 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/956b64fb-674a-40a6-be9b-b249d5b03aab-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr\" (UID: \"956b64fb-674a-40a6-be9b-b249d5b03aab\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.917673 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.939364 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.953609 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvc8n\" (UniqueName: \"kubernetes.io/projected/1f31c8ae-d209-4bed-8ed7-f568f713bd15-kube-api-access-xvc8n\") pod \"observability-operator-59bdc8b94-9kr64\" (UID: \"1f31c8ae-d209-4bed-8ed7-f568f713bd15\") " pod="openshift-operators/observability-operator-59bdc8b94-9kr64" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.953682 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f31c8ae-d209-4bed-8ed7-f568f713bd15-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9kr64\" (UID: \"1f31c8ae-d209-4bed-8ed7-f568f713bd15\") " pod="openshift-operators/observability-operator-59bdc8b94-9kr64" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.965029 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f31c8ae-d209-4bed-8ed7-f568f713bd15-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9kr64\" (UID: \"1f31c8ae-d209-4bed-8ed7-f568f713bd15\") " pod="openshift-operators/observability-operator-59bdc8b94-9kr64" Feb 16 12:44:16 crc kubenswrapper[4799]: I0216 12:44:16.981808 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvc8n\" (UniqueName: \"kubernetes.io/projected/1f31c8ae-d209-4bed-8ed7-f568f713bd15-kube-api-access-xvc8n\") pod \"observability-operator-59bdc8b94-9kr64\" (UID: \"1f31c8ae-d209-4bed-8ed7-f568f713bd15\") " pod="openshift-operators/observability-operator-59bdc8b94-9kr64" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.028372 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-fp4wv"] Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.029602 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.037448 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-bg2d5" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.053769 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-fp4wv"] Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.125246 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9kr64" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.138813 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr"] Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.157051 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbz5j\" (UniqueName: \"kubernetes.io/projected/ae279f38-d065-46a1-adb4-671588c18906-kube-api-access-xbz5j\") pod \"perses-operator-5bf474d74f-fp4wv\" (UID: \"ae279f38-d065-46a1-adb4-671588c18906\") " pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.157115 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae279f38-d065-46a1-adb4-671588c18906-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fp4wv\" (UID: \"ae279f38-d065-46a1-adb4-671588c18906\") " pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.260425 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xbz5j\" (UniqueName: \"kubernetes.io/projected/ae279f38-d065-46a1-adb4-671588c18906-kube-api-access-xbz5j\") pod \"perses-operator-5bf474d74f-fp4wv\" (UID: \"ae279f38-d065-46a1-adb4-671588c18906\") " pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.260486 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae279f38-d065-46a1-adb4-671588c18906-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fp4wv\" (UID: \"ae279f38-d065-46a1-adb4-671588c18906\") " pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.262894 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae279f38-d065-46a1-adb4-671588c18906-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fp4wv\" (UID: \"ae279f38-d065-46a1-adb4-671588c18906\") " pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.296960 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbz5j\" (UniqueName: \"kubernetes.io/projected/ae279f38-d065-46a1-adb4-671588c18906-kube-api-access-xbz5j\") pod \"perses-operator-5bf474d74f-fp4wv\" (UID: \"ae279f38-d065-46a1-adb4-671588c18906\") " pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.359325 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.406010 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8"] Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.485465 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9kr64"] Feb 16 12:44:17 crc kubenswrapper[4799]: W0216 12:44:17.508499 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f31c8ae_d209_4bed_8ed7_f568f713bd15.slice/crio-1ac66a0d4412f3ba051caa7fce023793a67a27263c8b3e8275e9061c1b9d30c4 WatchSource:0}: Error finding container 1ac66a0d4412f3ba051caa7fce023793a67a27263c8b3e8275e9061c1b9d30c4: Status 404 returned error can't find the container with id 1ac66a0d4412f3ba051caa7fce023793a67a27263c8b3e8275e9061c1b9d30c4 Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.546146 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr"] Feb 16 12:44:17 crc kubenswrapper[4799]: W0216 12:44:17.553904 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod956b64fb_674a_40a6_be9b_b249d5b03aab.slice/crio-65b359f1cd1ab35a1577c7a2919ac724dd808b35a2e7acda8975f3593a40de5a WatchSource:0}: Error finding container 65b359f1cd1ab35a1577c7a2919ac724dd808b35a2e7acda8975f3593a40de5a: Status 404 returned error can't find the container with id 65b359f1cd1ab35a1577c7a2919ac724dd808b35a2e7acda8975f3593a40de5a Feb 16 12:44:17 crc kubenswrapper[4799]: I0216 12:44:17.608465 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-fp4wv"] Feb 16 12:44:17 crc kubenswrapper[4799]: W0216 12:44:17.617758 4799 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae279f38_d065_46a1_adb4_671588c18906.slice/crio-18683958b00c38f64bce145984cf75b7e9e51d931b34beccd2200f3adf787d7b WatchSource:0}: Error finding container 18683958b00c38f64bce145984cf75b7e9e51d931b34beccd2200f3adf787d7b: Status 404 returned error can't find the container with id 18683958b00c38f64bce145984cf75b7e9e51d931b34beccd2200f3adf787d7b Feb 16 12:44:18 crc kubenswrapper[4799]: I0216 12:44:18.045595 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" event={"ID":"956b64fb-674a-40a6-be9b-b249d5b03aab","Type":"ContainerStarted","Data":"65b359f1cd1ab35a1577c7a2919ac724dd808b35a2e7acda8975f3593a40de5a"} Feb 16 12:44:18 crc kubenswrapper[4799]: I0216 12:44:18.047250 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" event={"ID":"25240a98-4447-4af0-89d7-8868fed65af8","Type":"ContainerStarted","Data":"dd478def72277274741542d82343d55e80b9b870f2f996b3aab65f89cfc9e34a"} Feb 16 12:44:18 crc kubenswrapper[4799]: I0216 12:44:18.048314 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" event={"ID":"ae279f38-d065-46a1-adb4-671588c18906","Type":"ContainerStarted","Data":"18683958b00c38f64bce145984cf75b7e9e51d931b34beccd2200f3adf787d7b"} Feb 16 12:44:18 crc kubenswrapper[4799]: I0216 12:44:18.049410 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr" event={"ID":"ac6a624e-f6f1-44b4-b236-99307dfc75b3","Type":"ContainerStarted","Data":"d0d6286ad2db894bb3083113bee6c69e110fa62f1ad56e3120302836fc99d53f"} Feb 16 12:44:18 crc kubenswrapper[4799]: I0216 12:44:18.050513 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-59bdc8b94-9kr64" event={"ID":"1f31c8ae-d209-4bed-8ed7-f568f713bd15","Type":"ContainerStarted","Data":"1ac66a0d4412f3ba051caa7fce023793a67a27263c8b3e8275e9061c1b9d30c4"}
Feb 16 12:44:21 crc kubenswrapper[4799]: I0216 12:44:21.220742 4799 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.645218 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dnw9t"]
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.647368 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.669687 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnw9t"]
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.695923 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2kq\" (UniqueName: \"kubernetes.io/projected/4ffce420-143b-409d-86cc-868058424a33-kube-api-access-tl2kq\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.696076 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-utilities\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.697040 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-catalog-content\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.798367 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl2kq\" (UniqueName: \"kubernetes.io/projected/4ffce420-143b-409d-86cc-868058424a33-kube-api-access-tl2kq\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.798440 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-utilities\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.798491 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-catalog-content\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.799224 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-catalog-content\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.799335 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-utilities\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.848532 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl2kq\" (UniqueName: \"kubernetes.io/projected/4ffce420-143b-409d-86cc-868058424a33-kube-api-access-tl2kq\") pod \"certified-operators-dnw9t\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") " pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:25 crc kubenswrapper[4799]: I0216 12:44:25.988668 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:29 crc kubenswrapper[4799]: I0216 12:44:29.849467 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnw9t"]
Feb 16 12:44:29 crc kubenswrapper[4799]: W0216 12:44:29.854062 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ffce420_143b_409d_86cc_868058424a33.slice/crio-266ee2ac87c43d37f86b50abcc68d40ffa56de4b7761f9ac72fe8405907b2b6c WatchSource:0}: Error finding container 266ee2ac87c43d37f86b50abcc68d40ffa56de4b7761f9ac72fe8405907b2b6c: Status 404 returned error can't find the container with id 266ee2ac87c43d37f86b50abcc68d40ffa56de4b7761f9ac72fe8405907b2b6c
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.200630 4799 generic.go:334] "Generic (PLEG): container finished" podID="4ffce420-143b-409d-86cc-868058424a33" containerID="b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d" exitCode=0
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.200741 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnw9t" event={"ID":"4ffce420-143b-409d-86cc-868058424a33","Type":"ContainerDied","Data":"b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d"}
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.201159 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnw9t" event={"ID":"4ffce420-143b-409d-86cc-868058424a33","Type":"ContainerStarted","Data":"266ee2ac87c43d37f86b50abcc68d40ffa56de4b7761f9ac72fe8405907b2b6c"}
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.204886 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9kr64" event={"ID":"1f31c8ae-d209-4bed-8ed7-f568f713bd15","Type":"ContainerStarted","Data":"240d112cbb535ea37ac0dd3e19676c7879d01edc14987f9455bc655a63eca9f2"}
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.210417 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9kr64"
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.216787 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9kr64"
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.222637 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" event={"ID":"956b64fb-674a-40a6-be9b-b249d5b03aab","Type":"ContainerStarted","Data":"fb8cee415a342f304ba47d49b76875e32d8c85adb02edf39aec69a7d86bab3c0"}
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.224104 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" event={"ID":"25240a98-4447-4af0-89d7-8868fed65af8","Type":"ContainerStarted","Data":"310f57643f7290bfcb609e4d25a21f732a1f73a5af22cba1ee385c14ebbb0a10"}
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.226948 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" event={"ID":"ae279f38-d065-46a1-adb4-671588c18906","Type":"ContainerStarted","Data":"bd37ff4f534c91880d9159a78167689b6d1bc90bf97e0c31a41cd69abd5fce14"}
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.227185 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-fp4wv"
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.230821 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr" event={"ID":"ac6a624e-f6f1-44b4-b236-99307dfc75b3","Type":"ContainerStarted","Data":"70a2dd5b49218e3b414cdefea35b75e44cee4b5f9c78cbb1e900ecfac30a030b"}
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.250910 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-fp4wv" podStartSLOduration=2.66269242 podStartE2EDuration="14.250884199s" podCreationTimestamp="2026-02-16 12:44:16 +0000 UTC" firstStartedPulling="2026-02-16 12:44:17.620327431 +0000 UTC m=+763.213342765" lastFinishedPulling="2026-02-16 12:44:29.20851921 +0000 UTC m=+774.801534544" observedRunningTime="2026-02-16 12:44:30.247734161 +0000 UTC m=+775.840749495" watchObservedRunningTime="2026-02-16 12:44:30.250884199 +0000 UTC m=+775.843899533"
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.271596 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8" podStartSLOduration=2.502339999 podStartE2EDuration="14.271573514s" podCreationTimestamp="2026-02-16 12:44:16 +0000 UTC" firstStartedPulling="2026-02-16 12:44:17.439414798 +0000 UTC m=+763.032430132" lastFinishedPulling="2026-02-16 12:44:29.208648313 +0000 UTC m=+774.801663647" observedRunningTime="2026-02-16 12:44:30.268373385 +0000 UTC m=+775.861388759" watchObservedRunningTime="2026-02-16 12:44:30.271573514 +0000 UTC m=+775.864588858"
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.344523 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr" podStartSLOduration=2.69356712 podStartE2EDuration="14.344501723s" podCreationTimestamp="2026-02-16 12:44:16 +0000 UTC" firstStartedPulling="2026-02-16 12:44:17.558369848 +0000 UTC m=+763.151385182" lastFinishedPulling="2026-02-16 12:44:29.209304451 +0000 UTC m=+774.802319785" observedRunningTime="2026-02-16 12:44:30.334653709 +0000 UTC m=+775.927669063" watchObservedRunningTime="2026-02-16 12:44:30.344501723 +0000 UTC m=+775.937517057"
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.379569 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9kr64" podStartSLOduration=2.515800804 podStartE2EDuration="14.379540518s" podCreationTimestamp="2026-02-16 12:44:16 +0000 UTC" firstStartedPulling="2026-02-16 12:44:17.512299256 +0000 UTC m=+763.105314590" lastFinishedPulling="2026-02-16 12:44:29.37603897 +0000 UTC m=+774.969054304" observedRunningTime="2026-02-16 12:44:30.374504938 +0000 UTC m=+775.967520272" watchObservedRunningTime="2026-02-16 12:44:30.379540518 +0000 UTC m=+775.972555842"
Feb 16 12:44:30 crc kubenswrapper[4799]: I0216 12:44:30.405577 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l48qr" podStartSLOduration=2.250584575 podStartE2EDuration="14.405556512s" podCreationTimestamp="2026-02-16 12:44:16 +0000 UTC" firstStartedPulling="2026-02-16 12:44:17.220401605 +0000 UTC m=+762.813416939" lastFinishedPulling="2026-02-16 12:44:29.375373542 +0000 UTC m=+774.968388876" observedRunningTime="2026-02-16 12:44:30.402894688 +0000 UTC m=+775.995910022" watchObservedRunningTime="2026-02-16 12:44:30.405556512 +0000 UTC m=+775.998571846"
Feb 16 12:44:32 crc kubenswrapper[4799]: I0216 12:44:32.249592 4799 generic.go:334] "Generic (PLEG): container finished" podID="4ffce420-143b-409d-86cc-868058424a33" containerID="711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d" exitCode=0
Feb 16 12:44:32 crc kubenswrapper[4799]: I0216 12:44:32.249884 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnw9t" event={"ID":"4ffce420-143b-409d-86cc-868058424a33","Type":"ContainerDied","Data":"711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d"}
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.193720 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qpbjl"]
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.205904 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.209966 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpbjl"]
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.350497 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8844\" (UniqueName: \"kubernetes.io/projected/391df744-79b4-46ea-aabf-a4616b070f57-kube-api-access-v8844\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.351096 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-catalog-content\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.351253 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-utilities\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.453036 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8844\" (UniqueName: \"kubernetes.io/projected/391df744-79b4-46ea-aabf-a4616b070f57-kube-api-access-v8844\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.453157 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-catalog-content\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.453187 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-utilities\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.453790 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-utilities\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.454009 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-catalog-content\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.482303 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8844\" (UniqueName: \"kubernetes.io/projected/391df744-79b4-46ea-aabf-a4616b070f57-kube-api-access-v8844\") pod \"redhat-operators-qpbjl\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.522253 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:33 crc kubenswrapper[4799]: I0216 12:44:33.922690 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpbjl"]
Feb 16 12:44:34 crc kubenswrapper[4799]: I0216 12:44:34.276611 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpbjl" event={"ID":"391df744-79b4-46ea-aabf-a4616b070f57","Type":"ContainerStarted","Data":"639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df"}
Feb 16 12:44:34 crc kubenswrapper[4799]: I0216 12:44:34.276687 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpbjl" event={"ID":"391df744-79b4-46ea-aabf-a4616b070f57","Type":"ContainerStarted","Data":"4554bcb6fd8d7a39e996dc4695e15cc6d48fee451bd57eac6d4d182d33886559"}
Feb 16 12:44:35 crc kubenswrapper[4799]: I0216 12:44:35.285173 4799 generic.go:334] "Generic (PLEG): container finished" podID="391df744-79b4-46ea-aabf-a4616b070f57" containerID="639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df" exitCode=0
Feb 16 12:44:35 crc kubenswrapper[4799]: I0216 12:44:35.285271 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpbjl" event={"ID":"391df744-79b4-46ea-aabf-a4616b070f57","Type":"ContainerDied","Data":"639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df"}
Feb 16 12:44:35 crc kubenswrapper[4799]: I0216 12:44:35.285640 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpbjl" event={"ID":"391df744-79b4-46ea-aabf-a4616b070f57","Type":"ContainerStarted","Data":"de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b"}
Feb 16 12:44:35 crc kubenswrapper[4799]: I0216 12:44:35.288688 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnw9t" event={"ID":"4ffce420-143b-409d-86cc-868058424a33","Type":"ContainerStarted","Data":"49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa"}
Feb 16 12:44:35 crc kubenswrapper[4799]: I0216 12:44:35.989593 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:35 crc kubenswrapper[4799]: I0216 12:44:35.989651 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:36 crc kubenswrapper[4799]: I0216 12:44:36.297500 4799 generic.go:334] "Generic (PLEG): container finished" podID="391df744-79b4-46ea-aabf-a4616b070f57" containerID="de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b" exitCode=0
Feb 16 12:44:36 crc kubenswrapper[4799]: I0216 12:44:36.297611 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpbjl" event={"ID":"391df744-79b4-46ea-aabf-a4616b070f57","Type":"ContainerDied","Data":"de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b"}
Feb 16 12:44:36 crc kubenswrapper[4799]: I0216 12:44:36.323278 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dnw9t" podStartSLOduration=7.280998931 podStartE2EDuration="11.323252926s" podCreationTimestamp="2026-02-16 12:44:25 +0000 UTC" firstStartedPulling="2026-02-16 12:44:30.202823792 +0000 UTC m=+775.795839136" lastFinishedPulling="2026-02-16 12:44:34.245077797 +0000 UTC m=+779.838093131" observedRunningTime="2026-02-16 12:44:35.327757443 +0000 UTC m=+780.920772777" watchObservedRunningTime="2026-02-16 12:44:36.323252926 +0000 UTC m=+781.916268280"
Feb 16 12:44:37 crc kubenswrapper[4799]: I0216 12:44:37.035543 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dnw9t" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="registry-server" probeResult="failure" output=<
Feb 16 12:44:37 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s
Feb 16 12:44:37 crc kubenswrapper[4799]: >
Feb 16 12:44:37 crc kubenswrapper[4799]: I0216 12:44:37.311101 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpbjl" event={"ID":"391df744-79b4-46ea-aabf-a4616b070f57","Type":"ContainerStarted","Data":"ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b"}
Feb 16 12:44:37 crc kubenswrapper[4799]: I0216 12:44:37.336631 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qpbjl" podStartSLOduration=1.752870685 podStartE2EDuration="4.33660763s" podCreationTimestamp="2026-02-16 12:44:33 +0000 UTC" firstStartedPulling="2026-02-16 12:44:34.278675208 +0000 UTC m=+779.871690542" lastFinishedPulling="2026-02-16 12:44:36.862412143 +0000 UTC m=+782.455427487" observedRunningTime="2026-02-16 12:44:37.332796053 +0000 UTC m=+782.925811387" watchObservedRunningTime="2026-02-16 12:44:37.33660763 +0000 UTC m=+782.929622964"
Feb 16 12:44:37 crc kubenswrapper[4799]: I0216 12:44:37.363255 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-fp4wv"
Feb 16 12:44:43 crc kubenswrapper[4799]: I0216 12:44:43.522915 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:43 crc kubenswrapper[4799]: I0216 12:44:43.523752 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qpbjl"
Feb 16 12:44:44 crc kubenswrapper[4799]: I0216 12:44:44.585716 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpbjl" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="registry-server" probeResult="failure" output=<
Feb 16 12:44:44 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s
Feb 16 12:44:44 crc kubenswrapper[4799]: >
Feb 16 12:44:46 crc kubenswrapper[4799]: I0216 12:44:46.095527 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:46 crc kubenswrapper[4799]: I0216 12:44:46.187344 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:48 crc kubenswrapper[4799]: I0216 12:44:48.574328 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnw9t"]
Feb 16 12:44:48 crc kubenswrapper[4799]: I0216 12:44:48.574975 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dnw9t" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="registry-server" containerID="cri-o://49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa" gracePeriod=2
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.007794 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.093714 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-utilities\") pod \"4ffce420-143b-409d-86cc-868058424a33\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") "
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.094048 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl2kq\" (UniqueName: \"kubernetes.io/projected/4ffce420-143b-409d-86cc-868058424a33-kube-api-access-tl2kq\") pod \"4ffce420-143b-409d-86cc-868058424a33\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") "
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.094213 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-catalog-content\") pod \"4ffce420-143b-409d-86cc-868058424a33\" (UID: \"4ffce420-143b-409d-86cc-868058424a33\") "
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.095712 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-utilities" (OuterVolumeSpecName: "utilities") pod "4ffce420-143b-409d-86cc-868058424a33" (UID: "4ffce420-143b-409d-86cc-868058424a33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.101813 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffce420-143b-409d-86cc-868058424a33-kube-api-access-tl2kq" (OuterVolumeSpecName: "kube-api-access-tl2kq") pod "4ffce420-143b-409d-86cc-868058424a33" (UID: "4ffce420-143b-409d-86cc-868058424a33"). InnerVolumeSpecName "kube-api-access-tl2kq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.143015 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ffce420-143b-409d-86cc-868058424a33" (UID: "4ffce420-143b-409d-86cc-868058424a33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.196923 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.196982 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl2kq\" (UniqueName: \"kubernetes.io/projected/4ffce420-143b-409d-86cc-868058424a33-kube-api-access-tl2kq\") on node \"crc\" DevicePath \"\""
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.196995 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffce420-143b-409d-86cc-868058424a33-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.398589 4799 generic.go:334] "Generic (PLEG): container finished" podID="4ffce420-143b-409d-86cc-868058424a33" containerID="49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa" exitCode=0
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.398686 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnw9t" event={"ID":"4ffce420-143b-409d-86cc-868058424a33","Type":"ContainerDied","Data":"49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa"}
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.398706 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnw9t"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.399172 4799 scope.go:117] "RemoveContainer" containerID="49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.399152 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnw9t" event={"ID":"4ffce420-143b-409d-86cc-868058424a33","Type":"ContainerDied","Data":"266ee2ac87c43d37f86b50abcc68d40ffa56de4b7761f9ac72fe8405907b2b6c"}
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.430950 4799 scope.go:117] "RemoveContainer" containerID="711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.435758 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnw9t"]
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.441551 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dnw9t"]
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.457310 4799 scope.go:117] "RemoveContainer" containerID="b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.472521 4799 scope.go:117] "RemoveContainer" containerID="49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa"
Feb 16 12:44:49 crc kubenswrapper[4799]: E0216 12:44:49.473197 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa\": container with ID starting with 49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa not found: ID does not exist" containerID="49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.473273 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa"} err="failed to get container status \"49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa\": rpc error: code = NotFound desc = could not find container \"49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa\": container with ID starting with 49df1ec78f4f67854970584588a3672da50e20cf2ea10324e33af3fcfbe79afa not found: ID does not exist"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.473317 4799 scope.go:117] "RemoveContainer" containerID="711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d"
Feb 16 12:44:49 crc kubenswrapper[4799]: E0216 12:44:49.473909 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d\": container with ID starting with 711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d not found: ID does not exist" containerID="711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.474000 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d"} err="failed to get container status \"711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d\": rpc error: code = NotFound desc = could not find container \"711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d\": container with ID starting with 711b50e1dc1dc4d9348fdfd7c32fd61e4da3811bc2d63d8d7a5484d29f46773d not found: ID does not exist"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.474076 4799 scope.go:117] "RemoveContainer" containerID="b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d"
Feb 16 12:44:49 crc kubenswrapper[4799]: E0216 12:44:49.474416 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d\": container with ID starting with b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d not found: ID does not exist" containerID="b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d"
Feb 16 12:44:49 crc kubenswrapper[4799]: I0216 12:44:49.474443 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d"} err="failed to get container status \"b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d\": rpc error: code = NotFound desc = could not find container \"b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d\": container with ID starting with b2d481d0cab052499314ddb4a57b1804796e852f925d018723ed09788780db8d not found: ID does not exist"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.405764 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5929g"]
Feb 16 12:44:50 crc kubenswrapper[4799]: E0216 12:44:50.406315 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="extract-utilities"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.406351 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="extract-utilities"
Feb 16 12:44:50 crc kubenswrapper[4799]: E0216 12:44:50.406392 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="extract-content"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.406409 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="extract-content"
Feb 16 12:44:50 crc kubenswrapper[4799]: E0216 12:44:50.406430 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="registry-server"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.406450 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="registry-server"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.406718 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffce420-143b-409d-86cc-868058424a33" containerName="registry-server"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.408729 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.422252 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5929g"]
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.520548 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-utilities\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.520637 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kk7j\" (UniqueName: \"kubernetes.io/projected/7aeee0fe-2c9f-4a03-aee0-53d572933e64-kube-api-access-4kk7j\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.520720 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-catalog-content\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.622752 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-utilities\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.622888 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kk7j\" (UniqueName: \"kubernetes.io/projected/7aeee0fe-2c9f-4a03-aee0-53d572933e64-kube-api-access-4kk7j\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.622972 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-catalog-content\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.623499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-utilities\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.623755 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-catalog-content\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.653414 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kk7j\" (UniqueName: \"kubernetes.io/projected/7aeee0fe-2c9f-4a03-aee0-53d572933e64-kube-api-access-4kk7j\") pod \"community-operators-5929g\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:50 crc kubenswrapper[4799]: I0216 12:44:50.776059 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5929g"
Feb 16 12:44:51 crc kubenswrapper[4799]: I0216 12:44:51.157610 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffce420-143b-409d-86cc-868058424a33" path="/var/lib/kubelet/pods/4ffce420-143b-409d-86cc-868058424a33/volumes"
Feb 16 12:44:51 crc kubenswrapper[4799]: I0216 12:44:51.332192 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5929g"]
Feb 16 12:44:51 crc kubenswrapper[4799]: W0216 12:44:51.345672 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aeee0fe_2c9f_4a03_aee0_53d572933e64.slice/crio-9c34b97be20c14af979e64c73f6244424916a1100988f6462feb0dd092dc8c54 WatchSource:0}: Error finding container 9c34b97be20c14af979e64c73f6244424916a1100988f6462feb0dd092dc8c54: Status 404 returned error can't find the container with id 9c34b97be20c14af979e64c73f6244424916a1100988f6462feb0dd092dc8c54
Feb 16 12:44:51 crc kubenswrapper[4799]: I0216 12:44:51.420689 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5929g" event={"ID":"7aeee0fe-2c9f-4a03-aee0-53d572933e64","Type":"ContainerStarted","Data":"9c34b97be20c14af979e64c73f6244424916a1100988f6462feb0dd092dc8c54"}
Feb 16 12:44:51 crc kubenswrapper[4799]: I0216 12:44:51.793545 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 12:44:51 crc kubenswrapper[4799]: I0216 12:44:51.794323 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon"
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:44:52 crc kubenswrapper[4799]: I0216 12:44:52.431677 4799 generic.go:334] "Generic (PLEG): container finished" podID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerID="eb02e4defab5ed04e1e0c24a6e8f26d351acccd981ccbb6a298d6f0333373787" exitCode=0 Feb 16 12:44:52 crc kubenswrapper[4799]: I0216 12:44:52.431751 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5929g" event={"ID":"7aeee0fe-2c9f-4a03-aee0-53d572933e64","Type":"ContainerDied","Data":"eb02e4defab5ed04e1e0c24a6e8f26d351acccd981ccbb6a298d6f0333373787"} Feb 16 12:44:53 crc kubenswrapper[4799]: I0216 12:44:53.438423 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5929g" event={"ID":"7aeee0fe-2c9f-4a03-aee0-53d572933e64","Type":"ContainerStarted","Data":"a2297f9d7899b991375a02e0894dc71d584445c6f8e3bb2834d711eaa331450e"} Feb 16 12:44:53 crc kubenswrapper[4799]: I0216 12:44:53.570889 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qpbjl" Feb 16 12:44:53 crc kubenswrapper[4799]: I0216 12:44:53.618197 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qpbjl" Feb 16 12:44:54 crc kubenswrapper[4799]: I0216 12:44:54.449919 4799 generic.go:334] "Generic (PLEG): container finished" podID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerID="a2297f9d7899b991375a02e0894dc71d584445c6f8e3bb2834d711eaa331450e" exitCode=0 Feb 16 12:44:54 crc kubenswrapper[4799]: I0216 12:44:54.450013 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5929g" event={"ID":"7aeee0fe-2c9f-4a03-aee0-53d572933e64","Type":"ContainerDied","Data":"a2297f9d7899b991375a02e0894dc71d584445c6f8e3bb2834d711eaa331450e"} Feb 16 12:44:55 crc 
kubenswrapper[4799]: I0216 12:44:55.054740 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg"] Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.070032 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.075663 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg"] Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.075934 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.204109 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cqlz\" (UniqueName: \"kubernetes.io/projected/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-kube-api-access-6cqlz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.204504 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.204663 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.306056 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.307310 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.307804 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cqlz\" (UniqueName: \"kubernetes.io/projected/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-kube-api-access-6cqlz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.307859 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: 
\"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.308148 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.342438 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cqlz\" (UniqueName: \"kubernetes.io/projected/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-kube-api-access-6cqlz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.461215 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5929g" event={"ID":"7aeee0fe-2c9f-4a03-aee0-53d572933e64","Type":"ContainerStarted","Data":"93f08f37416c71d9a0847878be8a94fdfe19841fe3d2f77c684e8fb95010752e"} Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.482330 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.516950 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5929g" podStartSLOduration=2.9031425029999998 podStartE2EDuration="5.5169174s" podCreationTimestamp="2026-02-16 12:44:50 +0000 UTC" firstStartedPulling="2026-02-16 12:44:52.434828641 +0000 UTC m=+798.027844025" lastFinishedPulling="2026-02-16 12:44:55.048603578 +0000 UTC m=+800.641618922" observedRunningTime="2026-02-16 12:44:55.511696904 +0000 UTC m=+801.104712238" watchObservedRunningTime="2026-02-16 12:44:55.5169174 +0000 UTC m=+801.109932744" Feb 16 12:44:55 crc kubenswrapper[4799]: I0216 12:44:55.773350 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg"] Feb 16 12:44:56 crc kubenswrapper[4799]: I0216 12:44:56.469583 4799 generic.go:334] "Generic (PLEG): container finished" podID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerID="43fc4d192f3b4d1e6d5d458e3ba0dccfc5e63494d047ec58462366ba1ab33606" exitCode=0 Feb 16 12:44:56 crc kubenswrapper[4799]: I0216 12:44:56.469672 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" event={"ID":"3b4d0f13-5b46-4300-bed6-54cf596bf6bd","Type":"ContainerDied","Data":"43fc4d192f3b4d1e6d5d458e3ba0dccfc5e63494d047ec58462366ba1ab33606"} Feb 16 12:44:56 crc kubenswrapper[4799]: I0216 12:44:56.469735 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" event={"ID":"3b4d0f13-5b46-4300-bed6-54cf596bf6bd","Type":"ContainerStarted","Data":"0ad88082b6c63c1bbed523a85ae8695cfb5e1ac419e3219c14e6cdb0ba2dba4b"} Feb 16 12:44:56 crc kubenswrapper[4799]: I0216 
12:44:56.977677 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpbjl"] Feb 16 12:44:56 crc kubenswrapper[4799]: I0216 12:44:56.978768 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qpbjl" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="registry-server" containerID="cri-o://ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b" gracePeriod=2 Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.442211 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpbjl" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.484528 4799 generic.go:334] "Generic (PLEG): container finished" podID="391df744-79b4-46ea-aabf-a4616b070f57" containerID="ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b" exitCode=0 Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.484594 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpbjl" event={"ID":"391df744-79b4-46ea-aabf-a4616b070f57","Type":"ContainerDied","Data":"ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b"} Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.484632 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpbjl" event={"ID":"391df744-79b4-46ea-aabf-a4616b070f57","Type":"ContainerDied","Data":"4554bcb6fd8d7a39e996dc4695e15cc6d48fee451bd57eac6d4d182d33886559"} Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.484658 4799 scope.go:117] "RemoveContainer" containerID="ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.484821 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qpbjl" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.515868 4799 scope.go:117] "RemoveContainer" containerID="de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.538692 4799 scope.go:117] "RemoveContainer" containerID="639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.545443 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-utilities\") pod \"391df744-79b4-46ea-aabf-a4616b070f57\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.545503 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8844\" (UniqueName: \"kubernetes.io/projected/391df744-79b4-46ea-aabf-a4616b070f57-kube-api-access-v8844\") pod \"391df744-79b4-46ea-aabf-a4616b070f57\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.545592 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-catalog-content\") pod \"391df744-79b4-46ea-aabf-a4616b070f57\" (UID: \"391df744-79b4-46ea-aabf-a4616b070f57\") " Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.547735 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-utilities" (OuterVolumeSpecName: "utilities") pod "391df744-79b4-46ea-aabf-a4616b070f57" (UID: "391df744-79b4-46ea-aabf-a4616b070f57"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.556888 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391df744-79b4-46ea-aabf-a4616b070f57-kube-api-access-v8844" (OuterVolumeSpecName: "kube-api-access-v8844") pod "391df744-79b4-46ea-aabf-a4616b070f57" (UID: "391df744-79b4-46ea-aabf-a4616b070f57"). InnerVolumeSpecName "kube-api-access-v8844". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.649256 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.649302 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8844\" (UniqueName: \"kubernetes.io/projected/391df744-79b4-46ea-aabf-a4616b070f57-kube-api-access-v8844\") on node \"crc\" DevicePath \"\"" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.688984 4799 scope.go:117] "RemoveContainer" containerID="ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b" Feb 16 12:44:57 crc kubenswrapper[4799]: E0216 12:44:57.690487 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b\": container with ID starting with ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b not found: ID does not exist" containerID="ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.690550 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b"} err="failed to get container status 
\"ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b\": rpc error: code = NotFound desc = could not find container \"ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b\": container with ID starting with ba59ce500a6b7f004f62911e2990bef7b7f438ce6808b4ddbc41ec6c91eec01b not found: ID does not exist" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.690590 4799 scope.go:117] "RemoveContainer" containerID="de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b" Feb 16 12:44:57 crc kubenswrapper[4799]: E0216 12:44:57.691366 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b\": container with ID starting with de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b not found: ID does not exist" containerID="de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.691411 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b"} err="failed to get container status \"de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b\": rpc error: code = NotFound desc = could not find container \"de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b\": container with ID starting with de8f2a84535971b68cdf595361392763bf508e83cfd9c6ee2eade971de20ee1b not found: ID does not exist" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.691448 4799 scope.go:117] "RemoveContainer" containerID="639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df" Feb 16 12:44:57 crc kubenswrapper[4799]: E0216 12:44:57.691793 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df\": container with ID starting with 639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df not found: ID does not exist" containerID="639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.691825 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df"} err="failed to get container status \"639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df\": rpc error: code = NotFound desc = could not find container \"639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df\": container with ID starting with 639ac38f51f2e0738988c009fd5ec7e8dbfcb1fd5c501c3700adb28638f810df not found: ID does not exist" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.700808 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "391df744-79b4-46ea-aabf-a4616b070f57" (UID: "391df744-79b4-46ea-aabf-a4616b070f57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.754067 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391df744-79b4-46ea-aabf-a4616b070f57-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.873230 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpbjl"] Feb 16 12:44:57 crc kubenswrapper[4799]: I0216 12:44:57.887197 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qpbjl"] Feb 16 12:44:58 crc kubenswrapper[4799]: I0216 12:44:58.497946 4799 generic.go:334] "Generic (PLEG): container finished" podID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerID="fa310c37b4d05d7d2ed03700e8a17821770434bec55dae781fa37e5aee4505f0" exitCode=0 Feb 16 12:44:58 crc kubenswrapper[4799]: I0216 12:44:58.498002 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" event={"ID":"3b4d0f13-5b46-4300-bed6-54cf596bf6bd","Type":"ContainerDied","Data":"fa310c37b4d05d7d2ed03700e8a17821770434bec55dae781fa37e5aee4505f0"} Feb 16 12:44:59 crc kubenswrapper[4799]: I0216 12:44:59.168211 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="391df744-79b4-46ea-aabf-a4616b070f57" path="/var/lib/kubelet/pods/391df744-79b4-46ea-aabf-a4616b070f57/volumes" Feb 16 12:44:59 crc kubenswrapper[4799]: I0216 12:44:59.509813 4799 generic.go:334] "Generic (PLEG): container finished" podID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerID="8df0714c185039aba50e7fe268fc9121311bb91ef12c226e02b44d15fb3dfcdb" exitCode=0 Feb 16 12:44:59 crc kubenswrapper[4799]: I0216 12:44:59.509930 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" 
event={"ID":"3b4d0f13-5b46-4300-bed6-54cf596bf6bd","Type":"ContainerDied","Data":"8df0714c185039aba50e7fe268fc9121311bb91ef12c226e02b44d15fb3dfcdb"} Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.199204 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5"] Feb 16 12:45:00 crc kubenswrapper[4799]: E0216 12:45:00.199707 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="extract-content" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.199729 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="extract-content" Feb 16 12:45:00 crc kubenswrapper[4799]: E0216 12:45:00.199749 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="extract-utilities" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.199764 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="extract-utilities" Feb 16 12:45:00 crc kubenswrapper[4799]: E0216 12:45:00.199787 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="registry-server" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.199798 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="registry-server" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.199972 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="391df744-79b4-46ea-aabf-a4616b070f57" containerName="registry-server" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.200663 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.205568 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5"] Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.206959 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.207303 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.295804 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28edd9b2-9413-409a-a64d-95677b269d33-config-volume\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.295932 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfsf\" (UniqueName: \"kubernetes.io/projected/28edd9b2-9413-409a-a64d-95677b269d33-kube-api-access-rdfsf\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.295995 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28edd9b2-9413-409a-a64d-95677b269d33-secret-volume\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.397513 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28edd9b2-9413-409a-a64d-95677b269d33-config-volume\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.397588 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdfsf\" (UniqueName: \"kubernetes.io/projected/28edd9b2-9413-409a-a64d-95677b269d33-kube-api-access-rdfsf\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.397636 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28edd9b2-9413-409a-a64d-95677b269d33-secret-volume\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.399570 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28edd9b2-9413-409a-a64d-95677b269d33-config-volume\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.408459 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/28edd9b2-9413-409a-a64d-95677b269d33-secret-volume\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.434153 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdfsf\" (UniqueName: \"kubernetes.io/projected/28edd9b2-9413-409a-a64d-95677b269d33-kube-api-access-rdfsf\") pod \"collect-profiles-29520765-m2hr5\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.539403 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.776622 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5929g" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.777226 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5929g" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.840215 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5929g" Feb 16 12:45:00 crc kubenswrapper[4799]: I0216 12:45:00.868414 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.006630 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cqlz\" (UniqueName: \"kubernetes.io/projected/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-kube-api-access-6cqlz\") pod \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.006708 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-util\") pod \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.006741 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-bundle\") pod \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\" (UID: \"3b4d0f13-5b46-4300-bed6-54cf596bf6bd\") " Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.007453 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-bundle" (OuterVolumeSpecName: "bundle") pod "3b4d0f13-5b46-4300-bed6-54cf596bf6bd" (UID: "3b4d0f13-5b46-4300-bed6-54cf596bf6bd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.016328 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-kube-api-access-6cqlz" (OuterVolumeSpecName: "kube-api-access-6cqlz") pod "3b4d0f13-5b46-4300-bed6-54cf596bf6bd" (UID: "3b4d0f13-5b46-4300-bed6-54cf596bf6bd"). InnerVolumeSpecName "kube-api-access-6cqlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.032395 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-util" (OuterVolumeSpecName: "util") pod "3b4d0f13-5b46-4300-bed6-54cf596bf6bd" (UID: "3b4d0f13-5b46-4300-bed6-54cf596bf6bd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.050167 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5"] Feb 16 12:45:01 crc kubenswrapper[4799]: W0216 12:45:01.054826 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28edd9b2_9413_409a_a64d_95677b269d33.slice/crio-4fe5f9696f37cd9b246366908e0fa4f78a65dc4f49db47dfe1376e05569c3181 WatchSource:0}: Error finding container 4fe5f9696f37cd9b246366908e0fa4f78a65dc4f49db47dfe1376e05569c3181: Status 404 returned error can't find the container with id 4fe5f9696f37cd9b246366908e0fa4f78a65dc4f49db47dfe1376e05569c3181 Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.108001 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cqlz\" (UniqueName: \"kubernetes.io/projected/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-kube-api-access-6cqlz\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.108048 4799 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-util\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.108062 4799 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b4d0f13-5b46-4300-bed6-54cf596bf6bd-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:01 
crc kubenswrapper[4799]: I0216 12:45:01.530158 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" event={"ID":"3b4d0f13-5b46-4300-bed6-54cf596bf6bd","Type":"ContainerDied","Data":"0ad88082b6c63c1bbed523a85ae8695cfb5e1ac419e3219c14e6cdb0ba2dba4b"} Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.530208 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad88082b6c63c1bbed523a85ae8695cfb5e1ac419e3219c14e6cdb0ba2dba4b" Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.530256 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg" Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.532041 4799 generic.go:334] "Generic (PLEG): container finished" podID="28edd9b2-9413-409a-a64d-95677b269d33" containerID="76f7ea01883bb2cddc8876b812aaa31776902c9c1deb631e9f84351948bd1159" exitCode=0 Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.532147 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" event={"ID":"28edd9b2-9413-409a-a64d-95677b269d33","Type":"ContainerDied","Data":"76f7ea01883bb2cddc8876b812aaa31776902c9c1deb631e9f84351948bd1159"} Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.532203 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" event={"ID":"28edd9b2-9413-409a-a64d-95677b269d33","Type":"ContainerStarted","Data":"4fe5f9696f37cd9b246366908e0fa4f78a65dc4f49db47dfe1376e05569c3181"} Feb 16 12:45:01 crc kubenswrapper[4799]: I0216 12:45:01.593733 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5929g" Feb 16 12:45:02 crc kubenswrapper[4799]: I0216 12:45:02.849355 4799 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:02 crc kubenswrapper[4799]: I0216 12:45:02.938685 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28edd9b2-9413-409a-a64d-95677b269d33-secret-volume\") pod \"28edd9b2-9413-409a-a64d-95677b269d33\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " Feb 16 12:45:02 crc kubenswrapper[4799]: I0216 12:45:02.938756 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdfsf\" (UniqueName: \"kubernetes.io/projected/28edd9b2-9413-409a-a64d-95677b269d33-kube-api-access-rdfsf\") pod \"28edd9b2-9413-409a-a64d-95677b269d33\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " Feb 16 12:45:02 crc kubenswrapper[4799]: I0216 12:45:02.938866 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28edd9b2-9413-409a-a64d-95677b269d33-config-volume\") pod \"28edd9b2-9413-409a-a64d-95677b269d33\" (UID: \"28edd9b2-9413-409a-a64d-95677b269d33\") " Feb 16 12:45:02 crc kubenswrapper[4799]: I0216 12:45:02.939533 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28edd9b2-9413-409a-a64d-95677b269d33-config-volume" (OuterVolumeSpecName: "config-volume") pod "28edd9b2-9413-409a-a64d-95677b269d33" (UID: "28edd9b2-9413-409a-a64d-95677b269d33"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:45:02 crc kubenswrapper[4799]: I0216 12:45:02.949274 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28edd9b2-9413-409a-a64d-95677b269d33-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28edd9b2-9413-409a-a64d-95677b269d33" (UID: "28edd9b2-9413-409a-a64d-95677b269d33"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:45:02 crc kubenswrapper[4799]: I0216 12:45:02.949410 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28edd9b2-9413-409a-a64d-95677b269d33-kube-api-access-rdfsf" (OuterVolumeSpecName: "kube-api-access-rdfsf") pod "28edd9b2-9413-409a-a64d-95677b269d33" (UID: "28edd9b2-9413-409a-a64d-95677b269d33"). InnerVolumeSpecName "kube-api-access-rdfsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:45:03 crc kubenswrapper[4799]: I0216 12:45:03.040639 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28edd9b2-9413-409a-a64d-95677b269d33-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:03 crc kubenswrapper[4799]: I0216 12:45:03.040700 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdfsf\" (UniqueName: \"kubernetes.io/projected/28edd9b2-9413-409a-a64d-95677b269d33-kube-api-access-rdfsf\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:03 crc kubenswrapper[4799]: I0216 12:45:03.040716 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28edd9b2-9413-409a-a64d-95677b269d33-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:03 crc kubenswrapper[4799]: I0216 12:45:03.774310 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5929g"] Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.093172 
4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5929g" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerName="registry-server" containerID="cri-o://93f08f37416c71d9a0847878be8a94fdfe19841fe3d2f77c684e8fb95010752e" gracePeriod=2 Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.093954 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.094724 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5" event={"ID":"28edd9b2-9413-409a-a64d-95677b269d33","Type":"ContainerDied","Data":"4fe5f9696f37cd9b246366908e0fa4f78a65dc4f49db47dfe1376e05569c3181"} Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.094754 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe5f9696f37cd9b246366908e0fa4f78a65dc4f49db47dfe1376e05569c3181" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.341874 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9fd4k"] Feb 16 12:45:04 crc kubenswrapper[4799]: E0216 12:45:04.342204 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28edd9b2-9413-409a-a64d-95677b269d33" containerName="collect-profiles" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.342251 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="28edd9b2-9413-409a-a64d-95677b269d33" containerName="collect-profiles" Feb 16 12:45:04 crc kubenswrapper[4799]: E0216 12:45:04.342269 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerName="util" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.342277 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerName="util" Feb 16 12:45:04 crc kubenswrapper[4799]: E0216 12:45:04.342289 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerName="pull" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.342299 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerName="pull" Feb 16 12:45:04 crc kubenswrapper[4799]: E0216 12:45:04.342343 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerName="extract" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.342368 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerName="extract" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.342580 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4d0f13-5b46-4300-bed6-54cf596bf6bd" containerName="extract" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.342609 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="28edd9b2-9413-409a-a64d-95677b269d33" containerName="collect-profiles" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.343408 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-9fd4k" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.345559 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mlrxn" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.345587 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.346828 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.352188 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9fd4k"] Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.458559 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptn69\" (UniqueName: \"kubernetes.io/projected/a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660-kube-api-access-ptn69\") pod \"nmstate-operator-694c9596b7-9fd4k\" (UID: \"a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9fd4k" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.559858 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptn69\" (UniqueName: \"kubernetes.io/projected/a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660-kube-api-access-ptn69\") pod \"nmstate-operator-694c9596b7-9fd4k\" (UID: \"a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9fd4k" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.582046 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptn69\" (UniqueName: \"kubernetes.io/projected/a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660-kube-api-access-ptn69\") pod \"nmstate-operator-694c9596b7-9fd4k\" (UID: 
\"a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9fd4k" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.660149 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-9fd4k" Feb 16 12:45:04 crc kubenswrapper[4799]: I0216 12:45:04.886261 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9fd4k"] Feb 16 12:45:04 crc kubenswrapper[4799]: W0216 12:45:04.893426 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda83cd9e0_dc18_4f68_ac2f_cfbdf85e0660.slice/crio-2987a3a08af5a4d6a1c14ba2703cab1ef1ec63b40c99b0190a81c4508d357afe WatchSource:0}: Error finding container 2987a3a08af5a4d6a1c14ba2703cab1ef1ec63b40c99b0190a81c4508d357afe: Status 404 returned error can't find the container with id 2987a3a08af5a4d6a1c14ba2703cab1ef1ec63b40c99b0190a81c4508d357afe Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.100550 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-9fd4k" event={"ID":"a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660","Type":"ContainerStarted","Data":"2987a3a08af5a4d6a1c14ba2703cab1ef1ec63b40c99b0190a81c4508d357afe"} Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.103289 4799 generic.go:334] "Generic (PLEG): container finished" podID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerID="93f08f37416c71d9a0847878be8a94fdfe19841fe3d2f77c684e8fb95010752e" exitCode=0 Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.103354 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5929g" event={"ID":"7aeee0fe-2c9f-4a03-aee0-53d572933e64","Type":"ContainerDied","Data":"93f08f37416c71d9a0847878be8a94fdfe19841fe3d2f77c684e8fb95010752e"} Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.103414 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5929g" event={"ID":"7aeee0fe-2c9f-4a03-aee0-53d572933e64","Type":"ContainerDied","Data":"9c34b97be20c14af979e64c73f6244424916a1100988f6462feb0dd092dc8c54"} Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.103427 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c34b97be20c14af979e64c73f6244424916a1100988f6462feb0dd092dc8c54" Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.103631 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5929g" Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.172112 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-utilities\") pod \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.172296 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kk7j\" (UniqueName: \"kubernetes.io/projected/7aeee0fe-2c9f-4a03-aee0-53d572933e64-kube-api-access-4kk7j\") pod \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.172435 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-catalog-content\") pod \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\" (UID: \"7aeee0fe-2c9f-4a03-aee0-53d572933e64\") " Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.174065 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-utilities" (OuterVolumeSpecName: "utilities") pod 
"7aeee0fe-2c9f-4a03-aee0-53d572933e64" (UID: "7aeee0fe-2c9f-4a03-aee0-53d572933e64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.182046 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aeee0fe-2c9f-4a03-aee0-53d572933e64-kube-api-access-4kk7j" (OuterVolumeSpecName: "kube-api-access-4kk7j") pod "7aeee0fe-2c9f-4a03-aee0-53d572933e64" (UID: "7aeee0fe-2c9f-4a03-aee0-53d572933e64"). InnerVolumeSpecName "kube-api-access-4kk7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.253427 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aeee0fe-2c9f-4a03-aee0-53d572933e64" (UID: "7aeee0fe-2c9f-4a03-aee0-53d572933e64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.275711 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.275760 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kk7j\" (UniqueName: \"kubernetes.io/projected/7aeee0fe-2c9f-4a03-aee0-53d572933e64-kube-api-access-4kk7j\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:05 crc kubenswrapper[4799]: I0216 12:45:05.275772 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aeee0fe-2c9f-4a03-aee0-53d572933e64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:06 crc kubenswrapper[4799]: I0216 12:45:06.111975 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5929g" Feb 16 12:45:06 crc kubenswrapper[4799]: I0216 12:45:06.165068 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5929g"] Feb 16 12:45:06 crc kubenswrapper[4799]: I0216 12:45:06.172845 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5929g"] Feb 16 12:45:07 crc kubenswrapper[4799]: I0216 12:45:07.160489 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" path="/var/lib/kubelet/pods/7aeee0fe-2c9f-4a03-aee0-53d572933e64/volumes" Feb 16 12:45:08 crc kubenswrapper[4799]: I0216 12:45:08.143981 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-9fd4k" event={"ID":"a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660","Type":"ContainerStarted","Data":"0e1f1ebca410f83d1251fbbc4776e970196ddf5da3b8ef4588c512424d5b2ae6"} Feb 16 
12:45:08 crc kubenswrapper[4799]: I0216 12:45:08.172376 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-9fd4k" podStartSLOduration=1.379054151 podStartE2EDuration="4.172353547s" podCreationTimestamp="2026-02-16 12:45:04 +0000 UTC" firstStartedPulling="2026-02-16 12:45:04.899241426 +0000 UTC m=+810.492256760" lastFinishedPulling="2026-02-16 12:45:07.692540822 +0000 UTC m=+813.285556156" observedRunningTime="2026-02-16 12:45:08.167870351 +0000 UTC m=+813.760885725" watchObservedRunningTime="2026-02-16 12:45:08.172353547 +0000 UTC m=+813.765368881" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.158093 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-prbbx"] Feb 16 12:45:09 crc kubenswrapper[4799]: E0216 12:45:09.158439 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerName="registry-server" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.158460 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerName="registry-server" Feb 16 12:45:09 crc kubenswrapper[4799]: E0216 12:45:09.158490 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerName="extract-utilities" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.158503 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerName="extract-utilities" Feb 16 12:45:09 crc kubenswrapper[4799]: E0216 12:45:09.158562 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerName="extract-content" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.158575 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerName="extract-content" Feb 
16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.158748 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aeee0fe-2c9f-4a03-aee0-53d572933e64" containerName="registry-server" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.159857 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.162276 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jzw6s" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.176561 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-prbbx"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.186073 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.187198 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.191309 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.193072 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8zffw"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.193948 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.217550 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.241039 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfc6\" (UniqueName: \"kubernetes.io/projected/cc1669bc-8a99-4bd8-979a-59d07b2cc876-kube-api-access-njfc6\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.241093 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-dbus-socket\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.241186 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-ovs-socket\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.241372 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-nmstate-lock\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.241425 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-gbf7h\" (UniqueName: \"kubernetes.io/projected/3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd-kube-api-access-gbf7h\") pod \"nmstate-metrics-58c85c668d-prbbx\" (UID: \"3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.241462 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qpg\" (UniqueName: \"kubernetes.io/projected/ea8a1c06-85d6-40e1-933d-163d4247f147-kube-api-access-w6qpg\") pod \"nmstate-webhook-866bcb46dc-v55q4\" (UID: \"ea8a1c06-85d6-40e1-933d-163d4247f147\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.241492 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ea8a1c06-85d6-40e1-933d-163d4247f147-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v55q4\" (UID: \"ea8a1c06-85d6-40e1-933d-163d4247f147\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.312215 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.313320 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.315308 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zfg7j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.316009 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.316429 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.323352 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.343279 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njfc6\" (UniqueName: \"kubernetes.io/projected/cc1669bc-8a99-4bd8-979a-59d07b2cc876-kube-api-access-njfc6\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.343356 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-dbus-socket\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.343446 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-ovs-socket\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.343497 
4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-nmstate-lock\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.343525 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbf7h\" (UniqueName: \"kubernetes.io/projected/3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd-kube-api-access-gbf7h\") pod \"nmstate-metrics-58c85c668d-prbbx\" (UID: \"3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.343553 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6qpg\" (UniqueName: \"kubernetes.io/projected/ea8a1c06-85d6-40e1-933d-163d4247f147-kube-api-access-w6qpg\") pod \"nmstate-webhook-866bcb46dc-v55q4\" (UID: \"ea8a1c06-85d6-40e1-933d-163d4247f147\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.343580 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ea8a1c06-85d6-40e1-933d-163d4247f147-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v55q4\" (UID: \"ea8a1c06-85d6-40e1-933d-163d4247f147\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:09 crc kubenswrapper[4799]: E0216 12:45:09.343772 4799 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 16 12:45:09 crc kubenswrapper[4799]: E0216 12:45:09.343866 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea8a1c06-85d6-40e1-933d-163d4247f147-tls-key-pair podName:ea8a1c06-85d6-40e1-933d-163d4247f147 nodeName:}" 
failed. No retries permitted until 2026-02-16 12:45:09.843836071 +0000 UTC m=+815.436851405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ea8a1c06-85d6-40e1-933d-163d4247f147-tls-key-pair") pod "nmstate-webhook-866bcb46dc-v55q4" (UID: "ea8a1c06-85d6-40e1-933d-163d4247f147") : secret "openshift-nmstate-webhook" not found Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.344279 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-nmstate-lock\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.344407 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-ovs-socket\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.344859 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cc1669bc-8a99-4bd8-979a-59d07b2cc876-dbus-socket\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.375679 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6qpg\" (UniqueName: \"kubernetes.io/projected/ea8a1c06-85d6-40e1-933d-163d4247f147-kube-api-access-w6qpg\") pod \"nmstate-webhook-866bcb46dc-v55q4\" (UID: \"ea8a1c06-85d6-40e1-933d-163d4247f147\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.388836 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njfc6\" (UniqueName: \"kubernetes.io/projected/cc1669bc-8a99-4bd8-979a-59d07b2cc876-kube-api-access-njfc6\") pod \"nmstate-handler-8zffw\" (UID: \"cc1669bc-8a99-4bd8-979a-59d07b2cc876\") " pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.399853 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbf7h\" (UniqueName: \"kubernetes.io/projected/3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd-kube-api-access-gbf7h\") pod \"nmstate-metrics-58c85c668d-prbbx\" (UID: \"3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.447106 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dd3fb402-ea08-43d2-a79b-81e50caac303-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.447198 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3fb402-ea08-43d2-a79b-81e50caac303-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.447228 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t498j\" (UniqueName: \"kubernetes.io/projected/dd3fb402-ea08-43d2-a79b-81e50caac303-kube-api-access-t498j\") pod \"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.483673 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.526757 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.548249 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dd3fb402-ea08-43d2-a79b-81e50caac303-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.548633 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3fb402-ea08-43d2-a79b-81e50caac303-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.548739 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t498j\" (UniqueName: \"kubernetes.io/projected/dd3fb402-ea08-43d2-a79b-81e50caac303-kube-api-access-t498j\") pod \"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.549448 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dd3fb402-ea08-43d2-a79b-81e50caac303-nginx-conf\") pod 
\"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.560897 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3fb402-ea08-43d2-a79b-81e50caac303-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: W0216 12:45:09.570878 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc1669bc_8a99_4bd8_979a_59d07b2cc876.slice/crio-7617e2037b70092b58ade1a3de9d477da07b1ad8ebf1aa8fb28153f63442962d WatchSource:0}: Error finding container 7617e2037b70092b58ade1a3de9d477da07b1ad8ebf1aa8fb28153f63442962d: Status 404 returned error can't find the container with id 7617e2037b70092b58ade1a3de9d477da07b1ad8ebf1aa8fb28153f63442962d Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.586897 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-87dd974dd-dmv9j"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.590169 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t498j\" (UniqueName: \"kubernetes.io/projected/dd3fb402-ea08-43d2-a79b-81e50caac303-kube-api-access-t498j\") pod \"nmstate-console-plugin-5c78fc5d65-x5r6j\" (UID: \"dd3fb402-ea08-43d2-a79b-81e50caac303\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.592292 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.607383 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-87dd974dd-dmv9j"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.628790 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.657685 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18656780-eff6-4510-90b7-d9cf48c8fce4-console-oauth-config\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.657754 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-oauth-serving-cert\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.657776 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18656780-eff6-4510-90b7-d9cf48c8fce4-console-serving-cert\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.657812 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-trusted-ca-bundle\") pod 
\"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.657858 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-console-config\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.657880 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjbr\" (UniqueName: \"kubernetes.io/projected/18656780-eff6-4510-90b7-d9cf48c8fce4-kube-api-access-cnjbr\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.657902 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-service-ca\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.759939 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjbr\" (UniqueName: \"kubernetes.io/projected/18656780-eff6-4510-90b7-d9cf48c8fce4-kube-api-access-cnjbr\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.760508 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-service-ca\") 
pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.760753 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18656780-eff6-4510-90b7-d9cf48c8fce4-console-oauth-config\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.760779 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-oauth-serving-cert\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.760799 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18656780-eff6-4510-90b7-d9cf48c8fce4-console-serving-cert\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.760834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-trusted-ca-bundle\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.760874 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-console-config\") pod \"console-87dd974dd-dmv9j\" 
(UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.766251 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-console-config\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.768871 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-trusted-ca-bundle\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.769056 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-oauth-serving-cert\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.769610 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18656780-eff6-4510-90b7-d9cf48c8fce4-console-oauth-config\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.774218 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18656780-eff6-4510-90b7-d9cf48c8fce4-console-serving-cert\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " 
pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.774806 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18656780-eff6-4510-90b7-d9cf48c8fce4-service-ca\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.790321 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjbr\" (UniqueName: \"kubernetes.io/projected/18656780-eff6-4510-90b7-d9cf48c8fce4-kube-api-access-cnjbr\") pod \"console-87dd974dd-dmv9j\" (UID: \"18656780-eff6-4510-90b7-d9cf48c8fce4\") " pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.802534 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-prbbx"] Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.863961 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ea8a1c06-85d6-40e1-933d-163d4247f147-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v55q4\" (UID: \"ea8a1c06-85d6-40e1-933d-163d4247f147\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.867300 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ea8a1c06-85d6-40e1-933d-163d4247f147-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v55q4\" (UID: \"ea8a1c06-85d6-40e1-933d-163d4247f147\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:09 crc kubenswrapper[4799]: I0216 12:45:09.918547 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j"] Feb 16 12:45:09 crc kubenswrapper[4799]: 
I0216 12:45:09.933224 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:10 crc kubenswrapper[4799]: I0216 12:45:10.109446 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:10 crc kubenswrapper[4799]: I0216 12:45:10.162958 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8zffw" event={"ID":"cc1669bc-8a99-4bd8-979a-59d07b2cc876","Type":"ContainerStarted","Data":"7617e2037b70092b58ade1a3de9d477da07b1ad8ebf1aa8fb28153f63442962d"} Feb 16 12:45:10 crc kubenswrapper[4799]: I0216 12:45:10.164302 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-87dd974dd-dmv9j"] Feb 16 12:45:10 crc kubenswrapper[4799]: I0216 12:45:10.165021 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" event={"ID":"3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd","Type":"ContainerStarted","Data":"221f0e41155e18b22a66b4e53c55e6fee6f4963679ae27fabd1709a24222777c"} Feb 16 12:45:10 crc kubenswrapper[4799]: I0216 12:45:10.166390 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" event={"ID":"dd3fb402-ea08-43d2-a79b-81e50caac303","Type":"ContainerStarted","Data":"8a315e597b506ff47836ee8821fec8e0ff78f7e4acfc70399eb25471713a3b46"} Feb 16 12:45:10 crc kubenswrapper[4799]: W0216 12:45:10.177243 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18656780_eff6_4510_90b7_d9cf48c8fce4.slice/crio-fa3ac47a963fd20ab6bd9d88a27b5e6d0198d06a7be23fff18033ac030709572 WatchSource:0}: Error finding container fa3ac47a963fd20ab6bd9d88a27b5e6d0198d06a7be23fff18033ac030709572: Status 404 returned error can't find the container with id 
fa3ac47a963fd20ab6bd9d88a27b5e6d0198d06a7be23fff18033ac030709572 Feb 16 12:45:10 crc kubenswrapper[4799]: I0216 12:45:10.387892 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4"] Feb 16 12:45:10 crc kubenswrapper[4799]: W0216 12:45:10.398316 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea8a1c06_85d6_40e1_933d_163d4247f147.slice/crio-1857cbeddac1a650ebed24ee85627fed340d2596ca54e6fd8bf46451245824fc WatchSource:0}: Error finding container 1857cbeddac1a650ebed24ee85627fed340d2596ca54e6fd8bf46451245824fc: Status 404 returned error can't find the container with id 1857cbeddac1a650ebed24ee85627fed340d2596ca54e6fd8bf46451245824fc Feb 16 12:45:11 crc kubenswrapper[4799]: I0216 12:45:11.175995 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-87dd974dd-dmv9j" event={"ID":"18656780-eff6-4510-90b7-d9cf48c8fce4","Type":"ContainerStarted","Data":"328d4812d74d2f97ffb1cdfbae40debe9460fb478ee39d37947fe2134755794b"} Feb 16 12:45:11 crc kubenswrapper[4799]: I0216 12:45:11.176503 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-87dd974dd-dmv9j" event={"ID":"18656780-eff6-4510-90b7-d9cf48c8fce4","Type":"ContainerStarted","Data":"fa3ac47a963fd20ab6bd9d88a27b5e6d0198d06a7be23fff18033ac030709572"} Feb 16 12:45:11 crc kubenswrapper[4799]: I0216 12:45:11.189321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" event={"ID":"ea8a1c06-85d6-40e1-933d-163d4247f147","Type":"ContainerStarted","Data":"1857cbeddac1a650ebed24ee85627fed340d2596ca54e6fd8bf46451245824fc"} Feb 16 12:45:11 crc kubenswrapper[4799]: I0216 12:45:11.227859 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-87dd974dd-dmv9j" podStartSLOduration=2.227821099 podStartE2EDuration="2.227821099s" 
podCreationTimestamp="2026-02-16 12:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:45:11.208872088 +0000 UTC m=+816.801887432" watchObservedRunningTime="2026-02-16 12:45:11.227821099 +0000 UTC m=+816.820836443" Feb 16 12:45:14 crc kubenswrapper[4799]: I0216 12:45:14.211831 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" event={"ID":"3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd","Type":"ContainerStarted","Data":"9c782fd8114681460f0c53130d9fae05db2f0dd7cd32ab5ccb3e2397ef1b48e5"} Feb 16 12:45:14 crc kubenswrapper[4799]: I0216 12:45:14.213852 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8zffw" event={"ID":"cc1669bc-8a99-4bd8-979a-59d07b2cc876","Type":"ContainerStarted","Data":"e9941c2004ccb61d9c7c3e210ee225a78cea413437a210129d236bbfe2961f78"} Feb 16 12:45:14 crc kubenswrapper[4799]: I0216 12:45:14.214007 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:14 crc kubenswrapper[4799]: I0216 12:45:14.215251 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" event={"ID":"ea8a1c06-85d6-40e1-933d-163d4247f147","Type":"ContainerStarted","Data":"a2dcc98a934abe041c08e66312f23158f7f11459cf9df25095e3245a1b9a5137"} Feb 16 12:45:14 crc kubenswrapper[4799]: I0216 12:45:14.215449 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:14 crc kubenswrapper[4799]: I0216 12:45:14.234643 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8zffw" podStartSLOduration=1.371451224 podStartE2EDuration="5.234613877s" podCreationTimestamp="2026-02-16 12:45:09 +0000 UTC" firstStartedPulling="2026-02-16 
12:45:09.581005677 +0000 UTC m=+815.174021011" lastFinishedPulling="2026-02-16 12:45:13.4441683 +0000 UTC m=+819.037183664" observedRunningTime="2026-02-16 12:45:14.232681593 +0000 UTC m=+819.825696937" watchObservedRunningTime="2026-02-16 12:45:14.234613877 +0000 UTC m=+819.827629221" Feb 16 12:45:14 crc kubenswrapper[4799]: I0216 12:45:14.256331 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" podStartSLOduration=2.253752685 podStartE2EDuration="5.256305595s" podCreationTimestamp="2026-02-16 12:45:09 +0000 UTC" firstStartedPulling="2026-02-16 12:45:10.402310428 +0000 UTC m=+815.995325762" lastFinishedPulling="2026-02-16 12:45:13.404863328 +0000 UTC m=+818.997878672" observedRunningTime="2026-02-16 12:45:14.25614225 +0000 UTC m=+819.849157574" watchObservedRunningTime="2026-02-16 12:45:14.256305595 +0000 UTC m=+819.849320929" Feb 16 12:45:17 crc kubenswrapper[4799]: I0216 12:45:17.245611 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" event={"ID":"3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd","Type":"ContainerStarted","Data":"d279ba345923da2239868ccacf51819cae3055df083f85a5df957fc220e0b9a9"} Feb 16 12:45:17 crc kubenswrapper[4799]: I0216 12:45:17.265756 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-prbbx" podStartSLOduration=1.7576808050000001 podStartE2EDuration="8.265730926s" podCreationTimestamp="2026-02-16 12:45:09 +0000 UTC" firstStartedPulling="2026-02-16 12:45:09.823034688 +0000 UTC m=+815.416050022" lastFinishedPulling="2026-02-16 12:45:16.331084789 +0000 UTC m=+821.924100143" observedRunningTime="2026-02-16 12:45:17.263329249 +0000 UTC m=+822.856344583" watchObservedRunningTime="2026-02-16 12:45:17.265730926 +0000 UTC m=+822.858746250" Feb 16 12:45:18 crc kubenswrapper[4799]: I0216 12:45:18.256652 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" event={"ID":"dd3fb402-ea08-43d2-a79b-81e50caac303","Type":"ContainerStarted","Data":"6235b0e94a3e8c9ecb51fe3ce84936eab73a12b99e7a77637f3688aee9f1cb32"} Feb 16 12:45:18 crc kubenswrapper[4799]: I0216 12:45:18.279814 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-x5r6j" podStartSLOduration=1.889509961 podStartE2EDuration="9.2797791s" podCreationTimestamp="2026-02-16 12:45:09 +0000 UTC" firstStartedPulling="2026-02-16 12:45:09.926192549 +0000 UTC m=+815.519207883" lastFinishedPulling="2026-02-16 12:45:17.316461688 +0000 UTC m=+822.909477022" observedRunningTime="2026-02-16 12:45:18.274658586 +0000 UTC m=+823.867673960" watchObservedRunningTime="2026-02-16 12:45:18.2797791 +0000 UTC m=+823.872794474" Feb 16 12:45:19 crc kubenswrapper[4799]: I0216 12:45:19.567300 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8zffw" Feb 16 12:45:19 crc kubenswrapper[4799]: I0216 12:45:19.933697 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:19 crc kubenswrapper[4799]: I0216 12:45:19.933847 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:19 crc kubenswrapper[4799]: I0216 12:45:19.941296 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:20 crc kubenswrapper[4799]: I0216 12:45:20.293959 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-87dd974dd-dmv9j" Feb 16 12:45:20 crc kubenswrapper[4799]: I0216 12:45:20.386381 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kkq5f"] Feb 16 12:45:21 crc kubenswrapper[4799]: I0216 
12:45:21.793212 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:45:21 crc kubenswrapper[4799]: I0216 12:45:21.793833 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:45:30 crc kubenswrapper[4799]: I0216 12:45:30.116996 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v55q4" Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.457378 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kkq5f" podUID="06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" containerName="console" containerID="cri-o://35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6" gracePeriod=15 Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.884891 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kkq5f_06ffe670-ee53-44df-bf3c-6d2f7c42f7d9/console/0.log" Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.885379 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.998254 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-trusted-ca-bundle\") pod \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.998303 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-oauth-serving-cert\") pod \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.998351 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzq8c\" (UniqueName: \"kubernetes.io/projected/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-kube-api-access-hzq8c\") pod \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.998391 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-serving-cert\") pod \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.998431 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-service-ca\") pod \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.998466 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-oauth-config\") pod \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.998544 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-config\") pod \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\" (UID: \"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9\") " Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.999710 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" (UID: "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:45:45 crc kubenswrapper[4799]: I0216 12:45:45.999760 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-service-ca" (OuterVolumeSpecName: "service-ca") pod "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" (UID: "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.000236 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-config" (OuterVolumeSpecName: "console-config") pod "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" (UID: "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.000673 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" (UID: "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.006011 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" (UID: "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.006395 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" (UID: "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.006529 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-kube-api-access-hzq8c" (OuterVolumeSpecName: "kube-api-access-hzq8c") pod "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" (UID: "06ffe670-ee53-44df-bf3c-6d2f7c42f7d9"). InnerVolumeSpecName "kube-api-access-hzq8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.100257 4799 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.100334 4799 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.100343 4799 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.100352 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzq8c\" (UniqueName: \"kubernetes.io/projected/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-kube-api-access-hzq8c\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.100364 4799 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.100374 4799 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.100382 4799 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:46 crc 
kubenswrapper[4799]: I0216 12:45:46.262261 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt"] Feb 16 12:45:46 crc kubenswrapper[4799]: E0216 12:45:46.262619 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" containerName="console" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.262640 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" containerName="console" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.262801 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" containerName="console" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.263753 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.266206 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.275302 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt"] Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.404225 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.404285 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxqfd\" (UniqueName: \"kubernetes.io/projected/ffddb3c3-fb7b-447a-8b54-ae12f9488514-kube-api-access-hxqfd\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.404403 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.505259 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.505323 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxqfd\" (UniqueName: \"kubernetes.io/projected/ffddb3c3-fb7b-447a-8b54-ae12f9488514-kube-api-access-hxqfd\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.505381 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.506122 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.506162 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.527745 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxqfd\" (UniqueName: \"kubernetes.io/projected/ffddb3c3-fb7b-447a-8b54-ae12f9488514-kube-api-access-hxqfd\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.548573 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kkq5f_06ffe670-ee53-44df-bf3c-6d2f7c42f7d9/console/0.log" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.548662 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" containerID="35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6" exitCode=2 Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.548714 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kkq5f" event={"ID":"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9","Type":"ContainerDied","Data":"35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6"} Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.548763 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kkq5f" event={"ID":"06ffe670-ee53-44df-bf3c-6d2f7c42f7d9","Type":"ContainerDied","Data":"be775679dafb19c7978db06ad371464b255f7d69d4cc765785bcd12b519734f6"} Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.548802 4799 scope.go:117] "RemoveContainer" containerID="35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.548829 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kkq5f" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.585368 4799 scope.go:117] "RemoveContainer" containerID="35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6" Feb 16 12:45:46 crc kubenswrapper[4799]: E0216 12:45:46.586419 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6\": container with ID starting with 35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6 not found: ID does not exist" containerID="35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.586459 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6"} err="failed to get container status \"35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6\": rpc error: code = NotFound desc = could not find container \"35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6\": container with ID starting with 35d4724ddfdee576aa821ef2315e317874bf96a4b0d2bdf435ace482291780c6 not found: ID does not exist" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.591894 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.645224 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kkq5f"] Feb 16 12:45:46 crc kubenswrapper[4799]: I0216 12:45:46.658055 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kkq5f"] Feb 16 12:45:47 crc kubenswrapper[4799]: I0216 12:45:47.115741 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt"] Feb 16 12:45:47 crc kubenswrapper[4799]: W0216 12:45:47.138396 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffddb3c3_fb7b_447a_8b54_ae12f9488514.slice/crio-e7e647e4c43f7256d7c973a8890ac75102e0615d1c9513e466831d21ebbc2744 WatchSource:0}: Error finding container e7e647e4c43f7256d7c973a8890ac75102e0615d1c9513e466831d21ebbc2744: Status 404 returned error can't find the container with id e7e647e4c43f7256d7c973a8890ac75102e0615d1c9513e466831d21ebbc2744 Feb 16 12:45:47 crc kubenswrapper[4799]: I0216 12:45:47.164019 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ffe670-ee53-44df-bf3c-6d2f7c42f7d9" path="/var/lib/kubelet/pods/06ffe670-ee53-44df-bf3c-6d2f7c42f7d9/volumes" Feb 16 12:45:47 crc kubenswrapper[4799]: I0216 12:45:47.562363 4799 generic.go:334] "Generic (PLEG): container finished" podID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerID="0fa5cfbb92eecd0b01ccb896cd8f909a90032d19903fbddb03870c6276d0f63d" exitCode=0 Feb 16 12:45:47 crc kubenswrapper[4799]: I0216 12:45:47.562437 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" 
event={"ID":"ffddb3c3-fb7b-447a-8b54-ae12f9488514","Type":"ContainerDied","Data":"0fa5cfbb92eecd0b01ccb896cd8f909a90032d19903fbddb03870c6276d0f63d"} Feb 16 12:45:47 crc kubenswrapper[4799]: I0216 12:45:47.562480 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" event={"ID":"ffddb3c3-fb7b-447a-8b54-ae12f9488514","Type":"ContainerStarted","Data":"e7e647e4c43f7256d7c973a8890ac75102e0615d1c9513e466831d21ebbc2744"} Feb 16 12:45:49 crc kubenswrapper[4799]: I0216 12:45:49.581331 4799 generic.go:334] "Generic (PLEG): container finished" podID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerID="16cb36ac268db1d137c60547168571bb68bfcfb41cfdf3973223cea3058df14e" exitCode=0 Feb 16 12:45:49 crc kubenswrapper[4799]: I0216 12:45:49.581471 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" event={"ID":"ffddb3c3-fb7b-447a-8b54-ae12f9488514","Type":"ContainerDied","Data":"16cb36ac268db1d137c60547168571bb68bfcfb41cfdf3973223cea3058df14e"} Feb 16 12:45:50 crc kubenswrapper[4799]: I0216 12:45:50.596286 4799 generic.go:334] "Generic (PLEG): container finished" podID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerID="ba60b9ce74fcbdcc1ee604b6329448df723621f64dfe81486b70b16b4e45dc8b" exitCode=0 Feb 16 12:45:50 crc kubenswrapper[4799]: I0216 12:45:50.596451 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" event={"ID":"ffddb3c3-fb7b-447a-8b54-ae12f9488514","Type":"ContainerDied","Data":"ba60b9ce74fcbdcc1ee604b6329448df723621f64dfe81486b70b16b4e45dc8b"} Feb 16 12:45:51 crc kubenswrapper[4799]: I0216 12:45:51.793231 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:45:51 crc kubenswrapper[4799]: I0216 12:45:51.793892 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:45:51 crc kubenswrapper[4799]: I0216 12:45:51.793978 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:45:51 crc kubenswrapper[4799]: I0216 12:45:51.795021 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86245d72136a5128ea7329ec812aaf474d9f9a0b7cefc3d679dd266cf69dce8f"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:45:51 crc kubenswrapper[4799]: I0216 12:45:51.795158 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://86245d72136a5128ea7329ec812aaf474d9f9a0b7cefc3d679dd266cf69dce8f" gracePeriod=600 Feb 16 12:45:51 crc kubenswrapper[4799]: I0216 12:45:51.882523 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:51.992111 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxqfd\" (UniqueName: \"kubernetes.io/projected/ffddb3c3-fb7b-447a-8b54-ae12f9488514-kube-api-access-hxqfd\") pod \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:51.992282 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-bundle\") pod \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:51.992338 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-util\") pod \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\" (UID: \"ffddb3c3-fb7b-447a-8b54-ae12f9488514\") " Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:51.994774 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-bundle" (OuterVolumeSpecName: "bundle") pod "ffddb3c3-fb7b-447a-8b54-ae12f9488514" (UID: "ffddb3c3-fb7b-447a-8b54-ae12f9488514"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.003089 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffddb3c3-fb7b-447a-8b54-ae12f9488514-kube-api-access-hxqfd" (OuterVolumeSpecName: "kube-api-access-hxqfd") pod "ffddb3c3-fb7b-447a-8b54-ae12f9488514" (UID: "ffddb3c3-fb7b-447a-8b54-ae12f9488514"). InnerVolumeSpecName "kube-api-access-hxqfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.006564 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-util" (OuterVolumeSpecName: "util") pod "ffddb3c3-fb7b-447a-8b54-ae12f9488514" (UID: "ffddb3c3-fb7b-447a-8b54-ae12f9488514"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.095175 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxqfd\" (UniqueName: \"kubernetes.io/projected/ffddb3c3-fb7b-447a-8b54-ae12f9488514-kube-api-access-hxqfd\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.095257 4799 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.095281 4799 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffddb3c3-fb7b-447a-8b54-ae12f9488514-util\") on node \"crc\" DevicePath \"\"" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.612165 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="86245d72136a5128ea7329ec812aaf474d9f9a0b7cefc3d679dd266cf69dce8f" exitCode=0 Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.612245 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"86245d72136a5128ea7329ec812aaf474d9f9a0b7cefc3d679dd266cf69dce8f"} Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.612278 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"34c6876ea0db42f2332afd913f232568333ea876303d83a249ce58ef9abe96d8"} Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.612300 4799 scope.go:117] "RemoveContainer" containerID="ba06c19342df98d380d31640088ece96cb12ba32a0f9050891bda640b4c7c600" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.620990 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" event={"ID":"ffddb3c3-fb7b-447a-8b54-ae12f9488514","Type":"ContainerDied","Data":"e7e647e4c43f7256d7c973a8890ac75102e0615d1c9513e466831d21ebbc2744"} Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.621060 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7e647e4c43f7256d7c973a8890ac75102e0615d1c9513e466831d21ebbc2744" Feb 16 12:45:52 crc kubenswrapper[4799]: I0216 12:45:52.621076 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.264526 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz"] Feb 16 12:46:01 crc kubenswrapper[4799]: E0216 12:46:01.265779 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerName="pull" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.265795 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerName="pull" Feb 16 12:46:01 crc kubenswrapper[4799]: E0216 12:46:01.265814 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerName="extract" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.265820 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerName="extract" Feb 16 12:46:01 crc kubenswrapper[4799]: E0216 12:46:01.265834 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerName="util" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.265842 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerName="util" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.265954 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffddb3c3-fb7b-447a-8b54-ae12f9488514" containerName="extract" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.266501 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.269795 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.269927 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.270093 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.270385 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-swbw6" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.270535 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.281737 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz"] Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.429526 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4af8dbaa-4279-4669-ac62-b78ae77d4063-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.429624 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjbsh\" (UniqueName: \"kubernetes.io/projected/4af8dbaa-4279-4669-ac62-b78ae77d4063-kube-api-access-xjbsh\") pod 
\"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.429735 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4af8dbaa-4279-4669-ac62-b78ae77d4063-webhook-cert\") pod \"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.531156 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4af8dbaa-4279-4669-ac62-b78ae77d4063-webhook-cert\") pod \"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.531615 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4af8dbaa-4279-4669-ac62-b78ae77d4063-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.531674 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjbsh\" (UniqueName: \"kubernetes.io/projected/4af8dbaa-4279-4669-ac62-b78ae77d4063-kube-api-access-xjbsh\") pod \"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc 
kubenswrapper[4799]: I0216 12:46:01.542638 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4af8dbaa-4279-4669-ac62-b78ae77d4063-webhook-cert\") pod \"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.559937 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4af8dbaa-4279-4669-ac62-b78ae77d4063-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.560052 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjbsh\" (UniqueName: \"kubernetes.io/projected/4af8dbaa-4279-4669-ac62-b78ae77d4063-kube-api-access-xjbsh\") pod \"metallb-operator-controller-manager-6c7df86bbf-sjqnz\" (UID: \"4af8dbaa-4279-4669-ac62-b78ae77d4063\") " pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.585593 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.627858 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg"] Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.628724 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.643843 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n5n99" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.644106 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.644202 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.709162 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg"] Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.735640 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8lm\" (UniqueName: \"kubernetes.io/projected/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-kube-api-access-ng8lm\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.735729 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-webhook-cert\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.736012 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-apiservice-cert\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.837268 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-apiservice-cert\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.837640 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng8lm\" (UniqueName: \"kubernetes.io/projected/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-kube-api-access-ng8lm\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.837670 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-webhook-cert\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.848731 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-apiservice-cert\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 
12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.851946 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-webhook-cert\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.865582 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng8lm\" (UniqueName: \"kubernetes.io/projected/11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3-kube-api-access-ng8lm\") pod \"metallb-operator-webhook-server-67d76b6b75-prfvg\" (UID: \"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3\") " pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.958408 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz"] Feb 16 12:46:01 crc kubenswrapper[4799]: I0216 12:46:01.967446 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:02 crc kubenswrapper[4799]: I0216 12:46:02.241310 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg"] Feb 16 12:46:02 crc kubenswrapper[4799]: W0216 12:46:02.252209 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11d39ab5_f7dc_4a0f_8746_5ec23ce4c7d3.slice/crio-472bdfe8b53b467685ae61f2d5f8dc3dcce9f4c87dddab2817873b2f22510b72 WatchSource:0}: Error finding container 472bdfe8b53b467685ae61f2d5f8dc3dcce9f4c87dddab2817873b2f22510b72: Status 404 returned error can't find the container with id 472bdfe8b53b467685ae61f2d5f8dc3dcce9f4c87dddab2817873b2f22510b72 Feb 16 12:46:02 crc kubenswrapper[4799]: I0216 12:46:02.704011 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" event={"ID":"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3","Type":"ContainerStarted","Data":"472bdfe8b53b467685ae61f2d5f8dc3dcce9f4c87dddab2817873b2f22510b72"} Feb 16 12:46:02 crc kubenswrapper[4799]: I0216 12:46:02.705393 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" event={"ID":"4af8dbaa-4279-4669-ac62-b78ae77d4063","Type":"ContainerStarted","Data":"28df37524b0c5d06f6d989605540e80a5d795ea93f89cae5e8186d7ee281ca9e"} Feb 16 12:46:07 crc kubenswrapper[4799]: I0216 12:46:07.751352 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" event={"ID":"11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3","Type":"ContainerStarted","Data":"88512613c396c35b6b22a339ff47fbb9809e0e73ad653ef749693113a2e8b0f4"} Feb 16 12:46:07 crc kubenswrapper[4799]: I0216 12:46:07.752100 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:07 crc kubenswrapper[4799]: I0216 12:46:07.755956 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" event={"ID":"4af8dbaa-4279-4669-ac62-b78ae77d4063","Type":"ContainerStarted","Data":"94717e2922bba95a9005e3201e159f841a4fd1e76f2202e4beca883232edc724"} Feb 16 12:46:07 crc kubenswrapper[4799]: I0216 12:46:07.756577 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:07 crc kubenswrapper[4799]: I0216 12:46:07.778648 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" podStartSLOduration=1.745473578 podStartE2EDuration="6.778623174s" podCreationTimestamp="2026-02-16 12:46:01 +0000 UTC" firstStartedPulling="2026-02-16 12:46:02.257915818 +0000 UTC m=+867.850931152" lastFinishedPulling="2026-02-16 12:46:07.291065374 +0000 UTC m=+872.884080748" observedRunningTime="2026-02-16 12:46:07.773273014 +0000 UTC m=+873.366288358" watchObservedRunningTime="2026-02-16 12:46:07.778623174 +0000 UTC m=+873.371638528" Feb 16 12:46:07 crc kubenswrapper[4799]: I0216 12:46:07.796498 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" podStartSLOduration=1.529223979 podStartE2EDuration="6.796474224s" podCreationTimestamp="2026-02-16 12:46:01 +0000 UTC" firstStartedPulling="2026-02-16 12:46:01.985700171 +0000 UTC m=+867.578715495" lastFinishedPulling="2026-02-16 12:46:07.252950366 +0000 UTC m=+872.845965740" observedRunningTime="2026-02-16 12:46:07.790950659 +0000 UTC m=+873.383965993" watchObservedRunningTime="2026-02-16 12:46:07.796474224 +0000 UTC m=+873.389489568" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.390722 4799 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m8ckl"] Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.392244 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.436512 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8ckl"] Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.452637 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-catalog-content\") pod \"redhat-marketplace-m8ckl\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.452712 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vlh8\" (UniqueName: \"kubernetes.io/projected/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-kube-api-access-4vlh8\") pod \"redhat-marketplace-m8ckl\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.452793 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-utilities\") pod \"redhat-marketplace-m8ckl\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.553974 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-catalog-content\") pod \"redhat-marketplace-m8ckl\" 
(UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.554079 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vlh8\" (UniqueName: \"kubernetes.io/projected/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-kube-api-access-4vlh8\") pod \"redhat-marketplace-m8ckl\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.554163 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-utilities\") pod \"redhat-marketplace-m8ckl\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.554973 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-utilities\") pod \"redhat-marketplace-m8ckl\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.555107 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-catalog-content\") pod \"redhat-marketplace-m8ckl\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.590158 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vlh8\" (UniqueName: \"kubernetes.io/projected/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-kube-api-access-4vlh8\") pod \"redhat-marketplace-m8ckl\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") 
" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:08 crc kubenswrapper[4799]: I0216 12:46:08.723728 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:09 crc kubenswrapper[4799]: I0216 12:46:09.220074 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8ckl"] Feb 16 12:46:09 crc kubenswrapper[4799]: I0216 12:46:09.779269 4799 generic.go:334] "Generic (PLEG): container finished" podID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerID="b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1" exitCode=0 Feb 16 12:46:09 crc kubenswrapper[4799]: I0216 12:46:09.779374 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8ckl" event={"ID":"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb","Type":"ContainerDied","Data":"b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1"} Feb 16 12:46:09 crc kubenswrapper[4799]: I0216 12:46:09.779863 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8ckl" event={"ID":"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb","Type":"ContainerStarted","Data":"78be855c7c4d3bac818ffb3155bf1d9cd35a418293ad2d0ea6b390e72b8d8247"} Feb 16 12:46:10 crc kubenswrapper[4799]: I0216 12:46:10.789382 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8ckl" event={"ID":"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb","Type":"ContainerStarted","Data":"4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2"} Feb 16 12:46:11 crc kubenswrapper[4799]: I0216 12:46:11.801733 4799 generic.go:334] "Generic (PLEG): container finished" podID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerID="4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2" exitCode=0 Feb 16 12:46:11 crc kubenswrapper[4799]: I0216 12:46:11.801810 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-m8ckl" event={"ID":"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb","Type":"ContainerDied","Data":"4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2"} Feb 16 12:46:13 crc kubenswrapper[4799]: I0216 12:46:13.816661 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8ckl" event={"ID":"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb","Type":"ContainerStarted","Data":"4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea"} Feb 16 12:46:13 crc kubenswrapper[4799]: I0216 12:46:13.836359 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m8ckl" podStartSLOduration=2.583191187 podStartE2EDuration="5.836335698s" podCreationTimestamp="2026-02-16 12:46:08 +0000 UTC" firstStartedPulling="2026-02-16 12:46:09.782570784 +0000 UTC m=+875.375586118" lastFinishedPulling="2026-02-16 12:46:13.035715305 +0000 UTC m=+878.628730629" observedRunningTime="2026-02-16 12:46:13.833013715 +0000 UTC m=+879.426029069" watchObservedRunningTime="2026-02-16 12:46:13.836335698 +0000 UTC m=+879.429351032" Feb 16 12:46:18 crc kubenswrapper[4799]: I0216 12:46:18.723876 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:18 crc kubenswrapper[4799]: I0216 12:46:18.724529 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:18 crc kubenswrapper[4799]: I0216 12:46:18.780903 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:18 crc kubenswrapper[4799]: I0216 12:46:18.927926 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:21 crc kubenswrapper[4799]: I0216 12:46:21.167733 4799 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8ckl"] Feb 16 12:46:21 crc kubenswrapper[4799]: I0216 12:46:21.881374 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m8ckl" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerName="registry-server" containerID="cri-o://4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea" gracePeriod=2 Feb 16 12:46:21 crc kubenswrapper[4799]: I0216 12:46:21.974946 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-67d76b6b75-prfvg" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.311524 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.471725 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-catalog-content\") pod \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.471816 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vlh8\" (UniqueName: \"kubernetes.io/projected/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-kube-api-access-4vlh8\") pod \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.471899 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-utilities\") pod \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\" (UID: \"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb\") " Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 
12:46:22.473625 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-utilities" (OuterVolumeSpecName: "utilities") pod "35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" (UID: "35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.486054 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-kube-api-access-4vlh8" (OuterVolumeSpecName: "kube-api-access-4vlh8") pod "35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" (UID: "35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb"). InnerVolumeSpecName "kube-api-access-4vlh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.528981 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" (UID: "35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.573746 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.574052 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.574192 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vlh8\" (UniqueName: \"kubernetes.io/projected/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb-kube-api-access-4vlh8\") on node \"crc\" DevicePath \"\"" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.889260 4799 generic.go:334] "Generic (PLEG): container finished" podID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerID="4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea" exitCode=0 Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.889322 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8ckl" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.889689 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8ckl" event={"ID":"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb","Type":"ContainerDied","Data":"4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea"} Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.889812 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8ckl" event={"ID":"35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb","Type":"ContainerDied","Data":"78be855c7c4d3bac818ffb3155bf1d9cd35a418293ad2d0ea6b390e72b8d8247"} Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.889886 4799 scope.go:117] "RemoveContainer" containerID="4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.909161 4799 scope.go:117] "RemoveContainer" containerID="4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.924314 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8ckl"] Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.932037 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8ckl"] Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.933065 4799 scope.go:117] "RemoveContainer" containerID="b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.958642 4799 scope.go:117] "RemoveContainer" containerID="4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea" Feb 16 12:46:22 crc kubenswrapper[4799]: E0216 12:46:22.960746 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea\": container with ID starting with 4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea not found: ID does not exist" containerID="4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.960793 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea"} err="failed to get container status \"4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea\": rpc error: code = NotFound desc = could not find container \"4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea\": container with ID starting with 4719909233b4f4f5e8b9c3e107d7b750e584cb974a90d771c974cd9c6b3574ea not found: ID does not exist" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.960829 4799 scope.go:117] "RemoveContainer" containerID="4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2" Feb 16 12:46:22 crc kubenswrapper[4799]: E0216 12:46:22.961252 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2\": container with ID starting with 4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2 not found: ID does not exist" containerID="4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.961282 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2"} err="failed to get container status \"4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2\": rpc error: code = NotFound desc = could not find container \"4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2\": container with ID 
starting with 4f7043baa7f320671e6297e0d56d6be8ec04344fdb79fef45b80a5ce9a7c40c2 not found: ID does not exist" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.961311 4799 scope.go:117] "RemoveContainer" containerID="b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1" Feb 16 12:46:22 crc kubenswrapper[4799]: E0216 12:46:22.962161 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1\": container with ID starting with b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1 not found: ID does not exist" containerID="b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1" Feb 16 12:46:22 crc kubenswrapper[4799]: I0216 12:46:22.962249 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1"} err="failed to get container status \"b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1\": rpc error: code = NotFound desc = could not find container \"b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1\": container with ID starting with b3a840c6ea201760a7c2c10be5c271b9cdf3665ffbb7662b82a612c8832998c1 not found: ID does not exist" Feb 16 12:46:23 crc kubenswrapper[4799]: I0216 12:46:23.161585 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" path="/var/lib/kubelet/pods/35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb/volumes" Feb 16 12:46:41 crc kubenswrapper[4799]: I0216 12:46:41.590857 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c7df86bbf-sjqnz" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.371928 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fmgnv"] Feb 16 12:46:42 crc kubenswrapper[4799]: 
E0216 12:46:42.372825 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerName="extract-content" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.372848 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerName="extract-content" Feb 16 12:46:42 crc kubenswrapper[4799]: E0216 12:46:42.372867 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerName="extract-utilities" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.372875 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerName="extract-utilities" Feb 16 12:46:42 crc kubenswrapper[4799]: E0216 12:46:42.372882 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerName="registry-server" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.372889 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerName="registry-server" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.373008 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f89df6-e8e2-44bc-a3f5-3f54f01d1ebb" containerName="registry-server" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.375191 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.378760 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.378969 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hs2nk" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.382722 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.386214 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"] Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.387721 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.389810 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.400470 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"] Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506013 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpr4n\" (UniqueName: \"kubernetes.io/projected/4c963766-8661-4a44-8416-f0202f10fafb-kube-api-access-mpr4n\") pod \"frr-k8s-webhook-server-78b44bf5bb-qrqgr\" (UID: \"4c963766-8661-4a44-8416-f0202f10fafb\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506081 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/e20c8664-edbd-4e42-96e9-da19e197b232-frr-startup\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506107 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-reloader\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506216 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-frr-conf\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506238 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e20c8664-edbd-4e42-96e9-da19e197b232-metrics-certs\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506268 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-frr-sockets\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506285 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-metrics\") pod \"frr-k8s-fmgnv\" (UID: 
\"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506304 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c963766-8661-4a44-8416-f0202f10fafb-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qrqgr\" (UID: \"4c963766-8661-4a44-8416-f0202f10fafb\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.506326 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh5q9\" (UniqueName: \"kubernetes.io/projected/e20c8664-edbd-4e42-96e9-da19e197b232-kube-api-access-fh5q9\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.508299 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jcvfs"] Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.510134 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jcvfs" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.515474 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.515756 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.515825 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-52299" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.515924 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.525404 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-4djwq"] Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.526803 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.535061 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.590985 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-4djwq"]
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616546 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-frr-conf\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616600 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e20c8664-edbd-4e42-96e9-da19e197b232-metrics-certs\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616632 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-frr-sockets\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616666 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-metrics\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616691 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c963766-8661-4a44-8416-f0202f10fafb-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qrqgr\" (UID: \"4c963766-8661-4a44-8416-f0202f10fafb\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616729 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh5q9\" (UniqueName: \"kubernetes.io/projected/e20c8664-edbd-4e42-96e9-da19e197b232-kube-api-access-fh5q9\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616775 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpr4n\" (UniqueName: \"kubernetes.io/projected/4c963766-8661-4a44-8416-f0202f10fafb-kube-api-access-mpr4n\") pod \"frr-k8s-webhook-server-78b44bf5bb-qrqgr\" (UID: \"4c963766-8661-4a44-8416-f0202f10fafb\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616804 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e20c8664-edbd-4e42-96e9-da19e197b232-frr-startup\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.616834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-reloader\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.617427 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-reloader\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.617540 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-metrics\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.617769 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-frr-conf\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.618430 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e20c8664-edbd-4e42-96e9-da19e197b232-frr-sockets\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.619103 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e20c8664-edbd-4e42-96e9-da19e197b232-frr-startup\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.648391 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c963766-8661-4a44-8416-f0202f10fafb-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qrqgr\" (UID: \"4c963766-8661-4a44-8416-f0202f10fafb\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.649691 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e20c8664-edbd-4e42-96e9-da19e197b232-metrics-certs\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.655406 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpr4n\" (UniqueName: \"kubernetes.io/projected/4c963766-8661-4a44-8416-f0202f10fafb-kube-api-access-mpr4n\") pod \"frr-k8s-webhook-server-78b44bf5bb-qrqgr\" (UID: \"4c963766-8661-4a44-8416-f0202f10fafb\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.669931 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh5q9\" (UniqueName: \"kubernetes.io/projected/e20c8664-edbd-4e42-96e9-da19e197b232-kube-api-access-fh5q9\") pod \"frr-k8s-fmgnv\" (UID: \"e20c8664-edbd-4e42-96e9-da19e197b232\") " pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.701395 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.717485 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.720484 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-metrics-certs\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.720567 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk289\" (UniqueName: \"kubernetes.io/projected/00530bae-1878-49a9-876f-97b521db61cd-kube-api-access-qk289\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.720631 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c54deb12-6083-4890-ab2d-20c5cede1547-metrics-certs\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.720677 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c54deb12-6083-4890-ab2d-20c5cede1547-cert\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.720703 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26fs\" (UniqueName: \"kubernetes.io/projected/c54deb12-6083-4890-ab2d-20c5cede1547-kube-api-access-t26fs\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.720724 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.720747 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/00530bae-1878-49a9-876f-97b521db61cd-metallb-excludel2\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.821347 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-metrics-certs\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.821719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk289\" (UniqueName: \"kubernetes.io/projected/00530bae-1878-49a9-876f-97b521db61cd-kube-api-access-qk289\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.821767 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c54deb12-6083-4890-ab2d-20c5cede1547-metrics-certs\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: E0216 12:46:42.821523 4799 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.821787 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c54deb12-6083-4890-ab2d-20c5cede1547-cert\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.821809 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t26fs\" (UniqueName: \"kubernetes.io/projected/c54deb12-6083-4890-ab2d-20c5cede1547-kube-api-access-t26fs\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.821827 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: E0216 12:46:42.821865 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-metrics-certs podName:00530bae-1878-49a9-876f-97b521db61cd nodeName:}" failed. No retries permitted until 2026-02-16 12:46:43.321838252 +0000 UTC m=+908.914853586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-metrics-certs") pod "speaker-jcvfs" (UID: "00530bae-1878-49a9-876f-97b521db61cd") : secret "speaker-certs-secret" not found
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.821899 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/00530bae-1878-49a9-876f-97b521db61cd-metallb-excludel2\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: E0216 12:46:42.821929 4799 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 12:46:42 crc kubenswrapper[4799]: E0216 12:46:42.822003 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist podName:00530bae-1878-49a9-876f-97b521db61cd nodeName:}" failed. No retries permitted until 2026-02-16 12:46:43.321977786 +0000 UTC m=+908.914993120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist") pod "speaker-jcvfs" (UID: "00530bae-1878-49a9-876f-97b521db61cd") : secret "metallb-memberlist" not found
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.822845 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/00530bae-1878-49a9-876f-97b521db61cd-metallb-excludel2\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.826479 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c54deb12-6083-4890-ab2d-20c5cede1547-metrics-certs\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.830325 4799 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.836762 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c54deb12-6083-4890-ab2d-20c5cede1547-cert\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.845588 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t26fs\" (UniqueName: \"kubernetes.io/projected/c54deb12-6083-4890-ab2d-20c5cede1547-kube-api-access-t26fs\") pod \"controller-69bbfbf88f-4djwq\" (UID: \"c54deb12-6083-4890-ab2d-20c5cede1547\") " pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:42 crc kubenswrapper[4799]: I0216 12:46:42.845598 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk289\" (UniqueName: \"kubernetes.io/projected/00530bae-1878-49a9-876f-97b521db61cd-kube-api-access-qk289\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:43 crc kubenswrapper[4799]: I0216 12:46:43.071238 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerStarted","Data":"82ba85bc56e2d8292c1763c490f00c1e827a6c4877b3e48d6d9e59c2e96e8f38"}
Feb 16 12:46:43 crc kubenswrapper[4799]: I0216 12:46:43.144518 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:43 crc kubenswrapper[4799]: I0216 12:46:43.176670 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"]
Feb 16 12:46:43 crc kubenswrapper[4799]: W0216 12:46:43.190837 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c963766_8661_4a44_8416_f0202f10fafb.slice/crio-df717dc2eac890c38dee701e12f1ea3c3b57b3a016ee5d5d3b3836cf838d36f7 WatchSource:0}: Error finding container df717dc2eac890c38dee701e12f1ea3c3b57b3a016ee5d5d3b3836cf838d36f7: Status 404 returned error can't find the container with id df717dc2eac890c38dee701e12f1ea3c3b57b3a016ee5d5d3b3836cf838d36f7
Feb 16 12:46:43 crc kubenswrapper[4799]: I0216 12:46:43.329244 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:43 crc kubenswrapper[4799]: I0216 12:46:43.329332 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-metrics-certs\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:43 crc kubenswrapper[4799]: E0216 12:46:43.330278 4799 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 12:46:43 crc kubenswrapper[4799]: E0216 12:46:43.330413 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist podName:00530bae-1878-49a9-876f-97b521db61cd nodeName:}" failed. No retries permitted until 2026-02-16 12:46:44.330379031 +0000 UTC m=+909.923394415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist") pod "speaker-jcvfs" (UID: "00530bae-1878-49a9-876f-97b521db61cd") : secret "metallb-memberlist" not found
Feb 16 12:46:43 crc kubenswrapper[4799]: I0216 12:46:43.340325 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-metrics-certs\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:43 crc kubenswrapper[4799]: I0216 12:46:43.440766 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-4djwq"]
Feb 16 12:46:43 crc kubenswrapper[4799]: W0216 12:46:43.448845 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc54deb12_6083_4890_ab2d_20c5cede1547.slice/crio-ef0cd0168eb6e28c934ca575c910b9d6e91dbcbd5c991ca04fe2c4f41ae5d2de WatchSource:0}: Error finding container ef0cd0168eb6e28c934ca575c910b9d6e91dbcbd5c991ca04fe2c4f41ae5d2de: Status 404 returned error can't find the container with id ef0cd0168eb6e28c934ca575c910b9d6e91dbcbd5c991ca04fe2c4f41ae5d2de
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.080208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr" event={"ID":"4c963766-8661-4a44-8416-f0202f10fafb","Type":"ContainerStarted","Data":"df717dc2eac890c38dee701e12f1ea3c3b57b3a016ee5d5d3b3836cf838d36f7"}
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.083550 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-4djwq" event={"ID":"c54deb12-6083-4890-ab2d-20c5cede1547","Type":"ContainerStarted","Data":"7c0fd3edac9b7758f5cfc2fe2e6de51fd4e1cb86764105f208ea36082189526b"}
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.083649 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-4djwq" event={"ID":"c54deb12-6083-4890-ab2d-20c5cede1547","Type":"ContainerStarted","Data":"b58cac59b581da585f449e889fe32a7c2608aac7c3415ad59d2a617face383c9"}
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.083672 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-4djwq" event={"ID":"c54deb12-6083-4890-ab2d-20c5cede1547","Type":"ContainerStarted","Data":"ef0cd0168eb6e28c934ca575c910b9d6e91dbcbd5c991ca04fe2c4f41ae5d2de"}
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.083812 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.124925 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-4djwq" podStartSLOduration=2.124892823 podStartE2EDuration="2.124892823s" podCreationTimestamp="2026-02-16 12:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:46:44.118728659 +0000 UTC m=+909.711744023" watchObservedRunningTime="2026-02-16 12:46:44.124892823 +0000 UTC m=+909.717908167"
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.350559 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.378993 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00530bae-1878-49a9-876f-97b521db61cd-memberlist\") pod \"speaker-jcvfs\" (UID: \"00530bae-1878-49a9-876f-97b521db61cd\") " pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:44 crc kubenswrapper[4799]: I0216 12:46:44.638793 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:45 crc kubenswrapper[4799]: I0216 12:46:45.119890 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jcvfs" event={"ID":"00530bae-1878-49a9-876f-97b521db61cd","Type":"ContainerStarted","Data":"1a7ed3e95c2022363b800a55e05edb8a258e2598eb8634b3f4dc4150123893b8"}
Feb 16 12:46:45 crc kubenswrapper[4799]: I0216 12:46:45.119963 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jcvfs" event={"ID":"00530bae-1878-49a9-876f-97b521db61cd","Type":"ContainerStarted","Data":"a3e64313c9e2d91aaf9e0f111b244826f55fc22432bd0be265072b71db9761fb"}
Feb 16 12:46:46 crc kubenswrapper[4799]: I0216 12:46:46.140140 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jcvfs" event={"ID":"00530bae-1878-49a9-876f-97b521db61cd","Type":"ContainerStarted","Data":"1fa28c15ca1d53592591560600818261c6c2270a3fa3684881cfd272bb81423e"}
Feb 16 12:46:46 crc kubenswrapper[4799]: I0216 12:46:46.140334 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:46 crc kubenswrapper[4799]: I0216 12:46:46.176137 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jcvfs" podStartSLOduration=4.176077615 podStartE2EDuration="4.176077615s" podCreationTimestamp="2026-02-16 12:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:46:46.16923273 +0000 UTC m=+911.762248064" watchObservedRunningTime="2026-02-16 12:46:46.176077615 +0000 UTC m=+911.769092979"
Feb 16 12:46:52 crc kubenswrapper[4799]: I0216 12:46:52.207830 4799 generic.go:334] "Generic (PLEG): container finished" podID="e20c8664-edbd-4e42-96e9-da19e197b232" containerID="2a8395f98405e809ed7d2ab9de40a97e8ce7c31283e3f7d3e10d43776fde346b" exitCode=0
Feb 16 12:46:52 crc kubenswrapper[4799]: I0216 12:46:52.207960 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerDied","Data":"2a8395f98405e809ed7d2ab9de40a97e8ce7c31283e3f7d3e10d43776fde346b"}
Feb 16 12:46:52 crc kubenswrapper[4799]: I0216 12:46:52.211542 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr" event={"ID":"4c963766-8661-4a44-8416-f0202f10fafb","Type":"ContainerStarted","Data":"47119633eb66d1f8b6410192682f64f34f61c1e2889328fbfa7e7589cad25c81"}
Feb 16 12:46:52 crc kubenswrapper[4799]: I0216 12:46:52.211786 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr"
Feb 16 12:46:53 crc kubenswrapper[4799]: I0216 12:46:53.158484 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-4djwq"
Feb 16 12:46:53 crc kubenswrapper[4799]: I0216 12:46:53.184986 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr" podStartSLOduration=3.315793751 podStartE2EDuration="11.184954915s" podCreationTimestamp="2026-02-16 12:46:42 +0000 UTC" firstStartedPulling="2026-02-16 12:46:43.196650023 +0000 UTC m=+908.789665357" lastFinishedPulling="2026-02-16 12:46:51.065811167 +0000 UTC m=+916.658826521" observedRunningTime="2026-02-16 12:46:52.278559784 +0000 UTC m=+917.871575108" watchObservedRunningTime="2026-02-16 12:46:53.184954915 +0000 UTC m=+918.777970259"
Feb 16 12:46:53 crc kubenswrapper[4799]: I0216 12:46:53.221992 4799 generic.go:334] "Generic (PLEG): container finished" podID="e20c8664-edbd-4e42-96e9-da19e197b232" containerID="88a08e715b341ca9540874f7baa5a013176801e4d6a44f62129655359daacc65" exitCode=0
Feb 16 12:46:53 crc kubenswrapper[4799]: I0216 12:46:53.222132 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerDied","Data":"88a08e715b341ca9540874f7baa5a013176801e4d6a44f62129655359daacc65"}
Feb 16 12:46:54 crc kubenswrapper[4799]: I0216 12:46:54.233934 4799 generic.go:334] "Generic (PLEG): container finished" podID="e20c8664-edbd-4e42-96e9-da19e197b232" containerID="45dfa77b0dd0d8752d82b6232c4068fcf54a3f2091ebe42bd5cc5c061a930081" exitCode=0
Feb 16 12:46:54 crc kubenswrapper[4799]: I0216 12:46:54.234028 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerDied","Data":"45dfa77b0dd0d8752d82b6232c4068fcf54a3f2091ebe42bd5cc5c061a930081"}
Feb 16 12:46:54 crc kubenswrapper[4799]: I0216 12:46:54.646269 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jcvfs"
Feb 16 12:46:55 crc kubenswrapper[4799]: I0216 12:46:55.258143 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerStarted","Data":"febe4b1a68c1d280ce561cbf4193c1b22ef8f541eb445c63360558f7dda3b40c"}
Feb 16 12:46:55 crc kubenswrapper[4799]: I0216 12:46:55.258656 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerStarted","Data":"f8a941e73a5af29d27df096ad16bf694c9f8f8035ab9b44c439690e07e850df8"}
Feb 16 12:46:55 crc kubenswrapper[4799]: I0216 12:46:55.258668 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerStarted","Data":"2bbbf07add011d1b86767091d83fa7887a28854124221b1e8571bca2d081142a"}
Feb 16 12:46:55 crc kubenswrapper[4799]: I0216 12:46:55.258678 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerStarted","Data":"63be6f69ca1c131421c31fb3a61cb9e7df53283a9f86f142912252f604894f20"}
Feb 16 12:46:55 crc kubenswrapper[4799]: I0216 12:46:55.258686 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerStarted","Data":"fa0a22951caf3dc5807db8007f7e0d32bbfe96fda1d99e3fc1de293934051138"}
Feb 16 12:46:56 crc kubenswrapper[4799]: I0216 12:46:56.270661 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fmgnv" event={"ID":"e20c8664-edbd-4e42-96e9-da19e197b232","Type":"ContainerStarted","Data":"7807f253f7926bc2b72588e0c2e7f0f77c19c14f9c8ae7476b8ed0199cfb6340"}
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.280167 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.321883 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fmgnv" podStartSLOduration=7.151399906 podStartE2EDuration="15.321842317s" podCreationTimestamp="2026-02-16 12:46:42 +0000 UTC" firstStartedPulling="2026-02-16 12:46:42.893668385 +0000 UTC m=+908.486683719" lastFinishedPulling="2026-02-16 12:46:51.064110786 +0000 UTC m=+916.657126130" observedRunningTime="2026-02-16 12:46:57.308744276 +0000 UTC m=+922.901759610" watchObservedRunningTime="2026-02-16 12:46:57.321842317 +0000 UTC m=+922.914857651"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.450623 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9zqll"]
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.451780 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9zqll"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.470704 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.470828 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.470962 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cfwlj"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.482237 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9zqll"]
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.483051 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gjnp\" (UniqueName: \"kubernetes.io/projected/7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7-kube-api-access-4gjnp\") pod \"openstack-operator-index-9zqll\" (UID: \"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7\") " pod="openstack-operators/openstack-operator-index-9zqll"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.585029 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gjnp\" (UniqueName: \"kubernetes.io/projected/7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7-kube-api-access-4gjnp\") pod \"openstack-operator-index-9zqll\" (UID: \"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7\") " pod="openstack-operators/openstack-operator-index-9zqll"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.604858 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gjnp\" (UniqueName: \"kubernetes.io/projected/7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7-kube-api-access-4gjnp\") pod \"openstack-operator-index-9zqll\" (UID: \"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7\") " pod="openstack-operators/openstack-operator-index-9zqll"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.702787 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.740226 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fmgnv"
Feb 16 12:46:57 crc kubenswrapper[4799]: I0216 12:46:57.772375 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9zqll"
Feb 16 12:46:58 crc kubenswrapper[4799]: I0216 12:46:58.047770 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9zqll"]
Feb 16 12:46:58 crc kubenswrapper[4799]: W0216 12:46:58.054353 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be8d5d1_81b5_4246_9ec3_9e5f67fbe0e7.slice/crio-6cdfa6bc2b65e3bdea55a999dd0d7b44225a18eb401665ec4d751b52bdd0eaa2 WatchSource:0}: Error finding container 6cdfa6bc2b65e3bdea55a999dd0d7b44225a18eb401665ec4d751b52bdd0eaa2: Status 404 returned error can't find the container with id 6cdfa6bc2b65e3bdea55a999dd0d7b44225a18eb401665ec4d751b52bdd0eaa2
Feb 16 12:46:58 crc kubenswrapper[4799]: I0216 12:46:58.304878 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9zqll" event={"ID":"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7","Type":"ContainerStarted","Data":"6cdfa6bc2b65e3bdea55a999dd0d7b44225a18eb401665ec4d751b52bdd0eaa2"}
Feb 16 12:47:00 crc kubenswrapper[4799]: I0216 12:47:00.614061 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9zqll"]
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.225700 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pvc2p"]
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.227906 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pvc2p"
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.238465 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pvc2p"]
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.327076 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9zqll" event={"ID":"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7","Type":"ContainerStarted","Data":"447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc"}
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.327273 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9zqll" podUID="7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7" containerName="registry-server" containerID="cri-o://447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc" gracePeriod=2
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.341637 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wgd\" (UniqueName: \"kubernetes.io/projected/29da4bf2-657a-4d9d-b61b-788ef89d4b19-kube-api-access-f7wgd\") pod \"openstack-operator-index-pvc2p\" (UID: \"29da4bf2-657a-4d9d-b61b-788ef89d4b19\") " pod="openstack-operators/openstack-operator-index-pvc2p"
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.348781 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9zqll" podStartSLOduration=1.9173834379999999 podStartE2EDuration="4.348735597s" podCreationTimestamp="2026-02-16 12:46:57 +0000 UTC" firstStartedPulling="2026-02-16 12:46:58.059869906 +0000 UTC m=+923.652885240" lastFinishedPulling="2026-02-16 12:47:00.491222065 +0000 UTC m=+926.084237399" observedRunningTime="2026-02-16 12:47:01.345853061 +0000 UTC m=+926.938868445" watchObservedRunningTime="2026-02-16 12:47:01.348735597 +0000 UTC m=+926.941750971"
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.443868 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wgd\" (UniqueName: \"kubernetes.io/projected/29da4bf2-657a-4d9d-b61b-788ef89d4b19-kube-api-access-f7wgd\") pod \"openstack-operator-index-pvc2p\" (UID: \"29da4bf2-657a-4d9d-b61b-788ef89d4b19\") " pod="openstack-operators/openstack-operator-index-pvc2p"
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.481076 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wgd\" (UniqueName: \"kubernetes.io/projected/29da4bf2-657a-4d9d-b61b-788ef89d4b19-kube-api-access-f7wgd\") pod \"openstack-operator-index-pvc2p\" (UID: \"29da4bf2-657a-4d9d-b61b-788ef89d4b19\") " pod="openstack-operators/openstack-operator-index-pvc2p"
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.559664 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pvc2p"
Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.819496 4799 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-index-9zqll" Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.864919 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pvc2p"] Feb 16 12:47:01 crc kubenswrapper[4799]: W0216 12:47:01.872431 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29da4bf2_657a_4d9d_b61b_788ef89d4b19.slice/crio-d52cd3811e3626eca75bee7a634509a5e832c5a85b889671b1fa818c08b8a7c0 WatchSource:0}: Error finding container d52cd3811e3626eca75bee7a634509a5e832c5a85b889671b1fa818c08b8a7c0: Status 404 returned error can't find the container with id d52cd3811e3626eca75bee7a634509a5e832c5a85b889671b1fa818c08b8a7c0 Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.954303 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gjnp\" (UniqueName: \"kubernetes.io/projected/7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7-kube-api-access-4gjnp\") pod \"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7\" (UID: \"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7\") " Feb 16 12:47:01 crc kubenswrapper[4799]: I0216 12:47:01.983470 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7-kube-api-access-4gjnp" (OuterVolumeSpecName: "kube-api-access-4gjnp") pod "7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7" (UID: "7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7"). InnerVolumeSpecName "kube-api-access-4gjnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.074636 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gjnp\" (UniqueName: \"kubernetes.io/projected/7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7-kube-api-access-4gjnp\") on node \"crc\" DevicePath \"\"" Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.336976 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pvc2p" event={"ID":"29da4bf2-657a-4d9d-b61b-788ef89d4b19","Type":"ContainerStarted","Data":"0f36ed9d8f92b856addf355712a6371da21366599f8cc9d39c63317ceb95c604"} Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.337570 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pvc2p" event={"ID":"29da4bf2-657a-4d9d-b61b-788ef89d4b19","Type":"ContainerStarted","Data":"d52cd3811e3626eca75bee7a634509a5e832c5a85b889671b1fa818c08b8a7c0"} Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.339511 4799 generic.go:334] "Generic (PLEG): container finished" podID="7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7" containerID="447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc" exitCode=0 Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.339541 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9zqll" Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.339555 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9zqll" event={"ID":"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7","Type":"ContainerDied","Data":"447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc"} Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.339884 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9zqll" event={"ID":"7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7","Type":"ContainerDied","Data":"6cdfa6bc2b65e3bdea55a999dd0d7b44225a18eb401665ec4d751b52bdd0eaa2"} Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.339957 4799 scope.go:117] "RemoveContainer" containerID="447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc" Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.369527 4799 scope.go:117] "RemoveContainer" containerID="447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc" Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.369744 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pvc2p" podStartSLOduration=1.308038245 podStartE2EDuration="1.369719745s" podCreationTimestamp="2026-02-16 12:47:01 +0000 UTC" firstStartedPulling="2026-02-16 12:47:01.876716322 +0000 UTC m=+927.469731656" lastFinishedPulling="2026-02-16 12:47:01.938397822 +0000 UTC m=+927.531413156" observedRunningTime="2026-02-16 12:47:02.36418645 +0000 UTC m=+927.957201794" watchObservedRunningTime="2026-02-16 12:47:02.369719745 +0000 UTC m=+927.962735099" Feb 16 12:47:02 crc kubenswrapper[4799]: E0216 12:47:02.370346 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc\": container with ID 
starting with 447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc not found: ID does not exist" containerID="447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc" Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.370408 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc"} err="failed to get container status \"447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc\": rpc error: code = NotFound desc = could not find container \"447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc\": container with ID starting with 447dc5de8a649e7ef9343caf5e9d9f9053a0230cfb6ac1ec0252594b1c8367cc not found: ID does not exist" Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.400061 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9zqll"] Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.407089 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9zqll"] Feb 16 12:47:02 crc kubenswrapper[4799]: I0216 12:47:02.726021 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qrqgr" Feb 16 12:47:03 crc kubenswrapper[4799]: I0216 12:47:03.530530 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7" path="/var/lib/kubelet/pods/7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7/volumes" Feb 16 12:47:11 crc kubenswrapper[4799]: I0216 12:47:11.562077 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-pvc2p" Feb 16 12:47:11 crc kubenswrapper[4799]: I0216 12:47:11.563216 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-pvc2p" Feb 16 12:47:11 crc 
kubenswrapper[4799]: I0216 12:47:11.610072 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-pvc2p" Feb 16 12:47:11 crc kubenswrapper[4799]: I0216 12:47:11.717713 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-pvc2p" Feb 16 12:47:12 crc kubenswrapper[4799]: I0216 12:47:12.708510 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fmgnv" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.297049 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx"] Feb 16 12:47:13 crc kubenswrapper[4799]: E0216 12:47:13.297672 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7" containerName="registry-server" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.297707 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7" containerName="registry-server" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.297950 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be8d5d1-81b5-4246-9ec3-9e5f67fbe0e7" containerName="registry-server" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.299706 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.303266 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-g6kcp" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.311274 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx"] Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.452789 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw67t\" (UniqueName: \"kubernetes.io/projected/c68693fd-4a9d-4ced-a924-278d18aca18f-kube-api-access-qw67t\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.453193 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-util\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.453250 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-bundle\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 
12:47:13.554976 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw67t\" (UniqueName: \"kubernetes.io/projected/c68693fd-4a9d-4ced-a924-278d18aca18f-kube-api-access-qw67t\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.555235 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-bundle\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.555279 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-util\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.556959 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-util\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.556955 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-bundle\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.595294 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw67t\" (UniqueName: \"kubernetes.io/projected/c68693fd-4a9d-4ced-a924-278d18aca18f-kube-api-access-qw67t\") pod \"b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.636022 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:13 crc kubenswrapper[4799]: I0216 12:47:13.923677 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx"] Feb 16 12:47:13 crc kubenswrapper[4799]: W0216 12:47:13.931347 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc68693fd_4a9d_4ced_a924_278d18aca18f.slice/crio-cf26c761966af67fa16970dbf4e164d07af3351c0eca91dd6819d5265933c6ed WatchSource:0}: Error finding container cf26c761966af67fa16970dbf4e164d07af3351c0eca91dd6819d5265933c6ed: Status 404 returned error can't find the container with id cf26c761966af67fa16970dbf4e164d07af3351c0eca91dd6819d5265933c6ed Feb 16 12:47:14 crc kubenswrapper[4799]: I0216 12:47:14.714708 4799 generic.go:334] "Generic (PLEG): container finished" podID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerID="158e3be1d8e21765a830c683b20276ede2d41b469fe07863a7185f3ce4b1d311" exitCode=0 Feb 16 
12:47:14 crc kubenswrapper[4799]: I0216 12:47:14.714782 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" event={"ID":"c68693fd-4a9d-4ced-a924-278d18aca18f","Type":"ContainerDied","Data":"158e3be1d8e21765a830c683b20276ede2d41b469fe07863a7185f3ce4b1d311"} Feb 16 12:47:14 crc kubenswrapper[4799]: I0216 12:47:14.714822 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" event={"ID":"c68693fd-4a9d-4ced-a924-278d18aca18f","Type":"ContainerStarted","Data":"cf26c761966af67fa16970dbf4e164d07af3351c0eca91dd6819d5265933c6ed"} Feb 16 12:47:15 crc kubenswrapper[4799]: I0216 12:47:15.722834 4799 generic.go:334] "Generic (PLEG): container finished" podID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerID="0a6e5c383ed16ece0ec3f7662f1133a3bae52e187bef0d189a2396f752ae7103" exitCode=0 Feb 16 12:47:15 crc kubenswrapper[4799]: I0216 12:47:15.723509 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" event={"ID":"c68693fd-4a9d-4ced-a924-278d18aca18f","Type":"ContainerDied","Data":"0a6e5c383ed16ece0ec3f7662f1133a3bae52e187bef0d189a2396f752ae7103"} Feb 16 12:47:16 crc kubenswrapper[4799]: I0216 12:47:16.733877 4799 generic.go:334] "Generic (PLEG): container finished" podID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerID="b6c5f629afe6f9c2bf546f4e92e528e3dc96974ca51186c9caf803eb2da02099" exitCode=0 Feb 16 12:47:16 crc kubenswrapper[4799]: I0216 12:47:16.733995 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" event={"ID":"c68693fd-4a9d-4ced-a924-278d18aca18f","Type":"ContainerDied","Data":"b6c5f629afe6f9c2bf546f4e92e528e3dc96974ca51186c9caf803eb2da02099"} Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.035011 
4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.143832 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-bundle\") pod \"c68693fd-4a9d-4ced-a924-278d18aca18f\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.143962 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-util\") pod \"c68693fd-4a9d-4ced-a924-278d18aca18f\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.144062 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw67t\" (UniqueName: \"kubernetes.io/projected/c68693fd-4a9d-4ced-a924-278d18aca18f-kube-api-access-qw67t\") pod \"c68693fd-4a9d-4ced-a924-278d18aca18f\" (UID: \"c68693fd-4a9d-4ced-a924-278d18aca18f\") " Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.145943 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-bundle" (OuterVolumeSpecName: "bundle") pod "c68693fd-4a9d-4ced-a924-278d18aca18f" (UID: "c68693fd-4a9d-4ced-a924-278d18aca18f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.155816 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c68693fd-4a9d-4ced-a924-278d18aca18f-kube-api-access-qw67t" (OuterVolumeSpecName: "kube-api-access-qw67t") pod "c68693fd-4a9d-4ced-a924-278d18aca18f" (UID: "c68693fd-4a9d-4ced-a924-278d18aca18f"). 
InnerVolumeSpecName "kube-api-access-qw67t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.179066 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-util" (OuterVolumeSpecName: "util") pod "c68693fd-4a9d-4ced-a924-278d18aca18f" (UID: "c68693fd-4a9d-4ced-a924-278d18aca18f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.246701 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw67t\" (UniqueName: \"kubernetes.io/projected/c68693fd-4a9d-4ced-a924-278d18aca18f-kube-api-access-qw67t\") on node \"crc\" DevicePath \"\"" Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.246785 4799 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.246809 4799 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c68693fd-4a9d-4ced-a924-278d18aca18f-util\") on node \"crc\" DevicePath \"\"" Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.753063 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" event={"ID":"c68693fd-4a9d-4ced-a924-278d18aca18f","Type":"ContainerDied","Data":"cf26c761966af67fa16970dbf4e164d07af3351c0eca91dd6819d5265933c6ed"} Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.753136 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx" Feb 16 12:47:18 crc kubenswrapper[4799]: I0216 12:47:18.753142 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf26c761966af67fa16970dbf4e164d07af3351c0eca91dd6819d5265933c6ed" Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.934049 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t"] Feb 16 12:47:25 crc kubenswrapper[4799]: E0216 12:47:25.935216 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerName="util" Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.935231 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerName="util" Feb 16 12:47:25 crc kubenswrapper[4799]: E0216 12:47:25.935243 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerName="extract" Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.935249 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerName="extract" Feb 16 12:47:25 crc kubenswrapper[4799]: E0216 12:47:25.935264 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerName="pull" Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.935270 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerName="pull" Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.935400 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68693fd-4a9d-4ced-a924-278d18aca18f" containerName="extract" Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.935927 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.938782 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-gmrlr" Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.965345 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t"] Feb 16 12:47:25 crc kubenswrapper[4799]: I0216 12:47:25.983810 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mk4\" (UniqueName: \"kubernetes.io/projected/e414b45d-e5dd-4905-9f69-781ec6e6d824-kube-api-access-l8mk4\") pod \"openstack-operator-controller-init-7678556f8f-7z95t\" (UID: \"e414b45d-e5dd-4905-9f69-781ec6e6d824\") " pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" Feb 16 12:47:26 crc kubenswrapper[4799]: I0216 12:47:26.085711 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8mk4\" (UniqueName: \"kubernetes.io/projected/e414b45d-e5dd-4905-9f69-781ec6e6d824-kube-api-access-l8mk4\") pod \"openstack-operator-controller-init-7678556f8f-7z95t\" (UID: \"e414b45d-e5dd-4905-9f69-781ec6e6d824\") " pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" Feb 16 12:47:26 crc kubenswrapper[4799]: I0216 12:47:26.119380 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8mk4\" (UniqueName: \"kubernetes.io/projected/e414b45d-e5dd-4905-9f69-781ec6e6d824-kube-api-access-l8mk4\") pod \"openstack-operator-controller-init-7678556f8f-7z95t\" (UID: \"e414b45d-e5dd-4905-9f69-781ec6e6d824\") " pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" Feb 16 12:47:26 crc kubenswrapper[4799]: I0216 12:47:26.258872 4799 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" Feb 16 12:47:26 crc kubenswrapper[4799]: I0216 12:47:26.570548 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t"] Feb 16 12:47:26 crc kubenswrapper[4799]: I0216 12:47:26.811066 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" event={"ID":"e414b45d-e5dd-4905-9f69-781ec6e6d824","Type":"ContainerStarted","Data":"a3145e7117df16100c9b0e6a4748c341d4964d2cd863babc80643694d440813b"} Feb 16 12:47:31 crc kubenswrapper[4799]: I0216 12:47:31.862817 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" event={"ID":"e414b45d-e5dd-4905-9f69-781ec6e6d824","Type":"ContainerStarted","Data":"221e619486be41aadbc80d9790fe9c4953ac06c38d0ac28ac0b59630261327aa"} Feb 16 12:47:31 crc kubenswrapper[4799]: I0216 12:47:31.864196 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" Feb 16 12:47:31 crc kubenswrapper[4799]: I0216 12:47:31.920016 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" podStartSLOduration=2.842015586 podStartE2EDuration="6.91997954s" podCreationTimestamp="2026-02-16 12:47:25 +0000 UTC" firstStartedPulling="2026-02-16 12:47:26.576903977 +0000 UTC m=+952.169919311" lastFinishedPulling="2026-02-16 12:47:30.654867931 +0000 UTC m=+956.247883265" observedRunningTime="2026-02-16 12:47:31.910555349 +0000 UTC m=+957.503570723" watchObservedRunningTime="2026-02-16 12:47:31.91997954 +0000 UTC m=+957.512994884" Feb 16 12:47:36 crc kubenswrapper[4799]: I0216 12:47:36.263613 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-7678556f8f-7z95t" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.252613 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.254502 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.257236 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vf26j" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.269801 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.271313 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.273550 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-j6gbz" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.281869 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.290707 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.312986 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.314104 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.319906 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-h5prl" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.331228 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.332279 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.337200 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-46kmt" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.343492 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqwqc\" (UniqueName: \"kubernetes.io/projected/b7dcb594-1126-4b75-8f5d-d2b5edc9ccad-kube-api-access-qqwqc\") pod \"cinder-operator-controller-manager-57746b5ff9-zh76r\" (UID: \"b7dcb594-1126-4b75-8f5d-d2b5edc9ccad\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.343568 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhwr\" (UniqueName: \"kubernetes.io/projected/e555e0d9-b9d6-4e25-ad40-c6d9c1cae800-kube-api-access-7vhwr\") pod \"barbican-operator-controller-manager-c4b7d6946-lzptd\" (UID: \"e555e0d9-b9d6-4e25-ad40-c6d9c1cae800\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.347015 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.357273 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.369724 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.370904 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.380560 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.381650 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.384584 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-gtgxg" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.386456 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8dm2h" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.394185 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.402429 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.425089 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.426100 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.433716 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.434807 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.438818 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.439066 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-smbhj" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.439166 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7gk2w" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.446939 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.448043 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57npd\" (UniqueName: \"kubernetes.io/projected/c8106c68-2300-410d-94fc-5dc71651dba5-kube-api-access-57npd\") pod \"glance-operator-controller-manager-68c6d499cb-z9x44\" (UID: \"c8106c68-2300-410d-94fc-5dc71651dba5\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.448088 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzg6\" (UniqueName: 
\"kubernetes.io/projected/b286a989-7544-4596-bb1b-f06469aedbdc-kube-api-access-jgzg6\") pod \"heat-operator-controller-manager-9595d6797-cq9hr\" (UID: \"b286a989-7544-4596-bb1b-f06469aedbdc\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.448114 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqwqc\" (UniqueName: \"kubernetes.io/projected/b7dcb594-1126-4b75-8f5d-d2b5edc9ccad-kube-api-access-qqwqc\") pod \"cinder-operator-controller-manager-57746b5ff9-zh76r\" (UID: \"b7dcb594-1126-4b75-8f5d-d2b5edc9ccad\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.448182 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhwr\" (UniqueName: \"kubernetes.io/projected/e555e0d9-b9d6-4e25-ad40-c6d9c1cae800-kube-api-access-7vhwr\") pod \"barbican-operator-controller-manager-c4b7d6946-lzptd\" (UID: \"e555e0d9-b9d6-4e25-ad40-c6d9c1cae800\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.448209 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rch2d\" (UniqueName: \"kubernetes.io/projected/5cc692f7-262b-4ffa-b259-69f665422e8d-kube-api-access-rch2d\") pod \"designate-operator-controller-manager-55cc45767f-ddwg6\" (UID: \"5cc692f7-262b-4ffa-b259-69f665422e8d\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.448227 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4xj\" (UniqueName: \"kubernetes.io/projected/3278a4bc-c2fa-4672-9a31-f53b0e95dbcd-kube-api-access-cv4xj\") pod 
\"horizon-operator-controller-manager-54fb488b88-m6t96\" (UID: \"3278a4bc-c2fa-4672-9a31-f53b0e95dbcd\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.452777 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.481835 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.483019 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.488465 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-b85bw" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.493782 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqwqc\" (UniqueName: \"kubernetes.io/projected/b7dcb594-1126-4b75-8f5d-d2b5edc9ccad-kube-api-access-qqwqc\") pod \"cinder-operator-controller-manager-57746b5ff9-zh76r\" (UID: \"b7dcb594-1126-4b75-8f5d-d2b5edc9ccad\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.493860 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.494851 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.503298 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-csp7b" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.509294 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhwr\" (UniqueName: \"kubernetes.io/projected/e555e0d9-b9d6-4e25-ad40-c6d9c1cae800-kube-api-access-7vhwr\") pod \"barbican-operator-controller-manager-c4b7d6946-lzptd\" (UID: \"e555e0d9-b9d6-4e25-ad40-c6d9c1cae800\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.527035 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.539250 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.544177 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.546809 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.547470 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kdqwf" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550692 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fpnz\" (UniqueName: \"kubernetes.io/projected/f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf-kube-api-access-7fpnz\") pod \"ironic-operator-controller-manager-6494cdbf8f-lwlqz\" (UID: \"f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550742 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chr67\" (UniqueName: \"kubernetes.io/projected/ae60b108-5e33-408f-a861-8e2e1e9ab643-kube-api-access-chr67\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550765 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550794 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rch2d\" (UniqueName: \"kubernetes.io/projected/5cc692f7-262b-4ffa-b259-69f665422e8d-kube-api-access-rch2d\") pod \"designate-operator-controller-manager-55cc45767f-ddwg6\" (UID: \"5cc692f7-262b-4ffa-b259-69f665422e8d\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550816 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4xj\" (UniqueName: \"kubernetes.io/projected/3278a4bc-c2fa-4672-9a31-f53b0e95dbcd-kube-api-access-cv4xj\") pod \"horizon-operator-controller-manager-54fb488b88-m6t96\" (UID: \"3278a4bc-c2fa-4672-9a31-f53b0e95dbcd\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550846 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwtn\" (UniqueName: \"kubernetes.io/projected/9ec15942-7ca3-444c-a096-a23c21b701ed-kube-api-access-slwtn\") pod \"keystone-operator-controller-manager-6c78d668d5-686fx\" (UID: \"9ec15942-7ca3-444c-a096-a23c21b701ed\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550888 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57npd\" (UniqueName: \"kubernetes.io/projected/c8106c68-2300-410d-94fc-5dc71651dba5-kube-api-access-57npd\") pod \"glance-operator-controller-manager-68c6d499cb-z9x44\" (UID: \"c8106c68-2300-410d-94fc-5dc71651dba5\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550918 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrghd\" (UniqueName: 
\"kubernetes.io/projected/fb144fe6-dbb4-492a-acb1-b642ea0a20f0-kube-api-access-jrghd\") pod \"manila-operator-controller-manager-96fff9cb8-jb5fm\" (UID: \"fb144fe6-dbb4-492a-acb1-b642ea0a20f0\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.550946 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzg6\" (UniqueName: \"kubernetes.io/projected/b286a989-7544-4596-bb1b-f06469aedbdc-kube-api-access-jgzg6\") pod \"heat-operator-controller-manager-9595d6797-cq9hr\" (UID: \"b286a989-7544-4596-bb1b-f06469aedbdc\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.566734 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.572533 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.573547 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.576538 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-k7wsn" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.591041 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.592492 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.593628 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4xj\" (UniqueName: \"kubernetes.io/projected/3278a4bc-c2fa-4672-9a31-f53b0e95dbcd-kube-api-access-cv4xj\") pod \"horizon-operator-controller-manager-54fb488b88-m6t96\" (UID: \"3278a4bc-c2fa-4672-9a31-f53b0e95dbcd\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.593719 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57npd\" (UniqueName: \"kubernetes.io/projected/c8106c68-2300-410d-94fc-5dc71651dba5-kube-api-access-57npd\") pod \"glance-operator-controller-manager-68c6d499cb-z9x44\" (UID: \"c8106c68-2300-410d-94fc-5dc71651dba5\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.598044 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.600037 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rch2d\" (UniqueName: \"kubernetes.io/projected/5cc692f7-262b-4ffa-b259-69f665422e8d-kube-api-access-rch2d\") pod \"designate-operator-controller-manager-55cc45767f-ddwg6\" (UID: \"5cc692f7-262b-4ffa-b259-69f665422e8d\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.606440 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzg6\" (UniqueName: \"kubernetes.io/projected/b286a989-7544-4596-bb1b-f06469aedbdc-kube-api-access-jgzg6\") pod \"heat-operator-controller-manager-9595d6797-cq9hr\" (UID: \"b286a989-7544-4596-bb1b-f06469aedbdc\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.615592 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.616634 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.621322 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.622646 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.628106 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-99s8q" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.631595 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-j6wzt" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.635682 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.642058 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.692898 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsbmf\" (UniqueName: \"kubernetes.io/projected/ec674ea8-aa42-4917-906f-9a9b098ba2c0-kube-api-access-jsbmf\") pod \"octavia-operator-controller-manager-745bbbd77b-4g8xm\" (UID: \"ec674ea8-aa42-4917-906f-9a9b098ba2c0\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.693053 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fpnz\" (UniqueName: \"kubernetes.io/projected/f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf-kube-api-access-7fpnz\") pod \"ironic-operator-controller-manager-6494cdbf8f-lwlqz\" (UID: \"f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.693084 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc959\" (UniqueName: \"kubernetes.io/projected/1c684efb-e592-4c17-a896-897b466cd387-kube-api-access-kc959\") pod \"mariadb-operator-controller-manager-66997756f6-dqssm\" (UID: \"1c684efb-e592-4c17-a896-897b466cd387\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.693115 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chr67\" (UniqueName: \"kubernetes.io/projected/ae60b108-5e33-408f-a861-8e2e1e9ab643-kube-api-access-chr67\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.693169 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.693194 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7fz\" (UniqueName: \"kubernetes.io/projected/8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379-kube-api-access-mw7fz\") pod \"neutron-operator-controller-manager-54967dbbdf-g4fg8\" (UID: \"8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.693268 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwtn\" (UniqueName: 
\"kubernetes.io/projected/9ec15942-7ca3-444c-a096-a23c21b701ed-kube-api-access-slwtn\") pod \"keystone-operator-controller-manager-6c78d668d5-686fx\" (UID: \"9ec15942-7ca3-444c-a096-a23c21b701ed\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.693390 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrghd\" (UniqueName: \"kubernetes.io/projected/fb144fe6-dbb4-492a-acb1-b642ea0a20f0-kube-api-access-jrghd\") pod \"manila-operator-controller-manager-96fff9cb8-jb5fm\" (UID: \"fb144fe6-dbb4-492a-acb1-b642ea0a20f0\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.693412 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpb5j\" (UniqueName: \"kubernetes.io/projected/17536931-400e-4131-8992-a30c2ebda385-kube-api-access-bpb5j\") pod \"nova-operator-controller-manager-5ddd85db87-8r6qg\" (UID: \"17536931-400e-4131-8992-a30c2ebda385\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.726669 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" Feb 16 12:47:56 crc kubenswrapper[4799]: E0216 12:47:56.726827 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 12:47:56 crc kubenswrapper[4799]: E0216 12:47:56.726972 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert podName:ae60b108-5e33-408f-a861-8e2e1e9ab643 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:47:57.22693458 +0000 UTC m=+982.819949914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert") pod "infra-operator-controller-manager-66d6b5f488-gt66t" (UID: "ae60b108-5e33-408f-a861-8e2e1e9ab643") : secret "infra-operator-webhook-server-cert" not found Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.744009 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.759864 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.760183 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.760262 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slwtn\" (UniqueName: \"kubernetes.io/projected/9ec15942-7ca3-444c-a096-a23c21b701ed-kube-api-access-slwtn\") pod \"keystone-operator-controller-manager-6c78d668d5-686fx\" (UID: \"9ec15942-7ca3-444c-a096-a23c21b701ed\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.762416 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chr67\" (UniqueName: \"kubernetes.io/projected/ae60b108-5e33-408f-a861-8e2e1e9ab643-kube-api-access-chr67\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.763256 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fpnz\" (UniqueName: \"kubernetes.io/projected/f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf-kube-api-access-7fpnz\") pod \"ironic-operator-controller-manager-6494cdbf8f-lwlqz\" (UID: \"f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.766845 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrghd\" (UniqueName: \"kubernetes.io/projected/fb144fe6-dbb4-492a-acb1-b642ea0a20f0-kube-api-access-jrghd\") pod \"manila-operator-controller-manager-96fff9cb8-jb5fm\" (UID: \"fb144fe6-dbb4-492a-acb1-b642ea0a20f0\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.782398 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.783565 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.795237 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"] Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.796114 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpb5j\" (UniqueName: \"kubernetes.io/projected/17536931-400e-4131-8992-a30c2ebda385-kube-api-access-bpb5j\") pod \"nova-operator-controller-manager-5ddd85db87-8r6qg\" (UID: \"17536931-400e-4131-8992-a30c2ebda385\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.796211 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsbmf\" (UniqueName: \"kubernetes.io/projected/ec674ea8-aa42-4917-906f-9a9b098ba2c0-kube-api-access-jsbmf\") pod \"octavia-operator-controller-manager-745bbbd77b-4g8xm\" (UID: \"ec674ea8-aa42-4917-906f-9a9b098ba2c0\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.796259 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc959\" (UniqueName: \"kubernetes.io/projected/1c684efb-e592-4c17-a896-897b466cd387-kube-api-access-kc959\") pod \"mariadb-operator-controller-manager-66997756f6-dqssm\" (UID: \"1c684efb-e592-4c17-a896-897b466cd387\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.796452 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7fz\" (UniqueName: \"kubernetes.io/projected/8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379-kube-api-access-mw7fz\") pod 
\"neutron-operator-controller-manager-54967dbbdf-g4fg8\" (UID: \"8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.796585 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.799531 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fc4tn" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.816480 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.816721 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-w7grh" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.823411 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7fz\" (UniqueName: \"kubernetes.io/projected/8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379-kube-api-access-mw7fz\") pod \"neutron-operator-controller-manager-54967dbbdf-g4fg8\" (UID: \"8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.829202 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpb5j\" (UniqueName: \"kubernetes.io/projected/17536931-400e-4131-8992-a30c2ebda385-kube-api-access-bpb5j\") pod \"nova-operator-controller-manager-5ddd85db87-8r6qg\" (UID: \"17536931-400e-4131-8992-a30c2ebda385\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 
12:47:56.832282 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsbmf\" (UniqueName: \"kubernetes.io/projected/ec674ea8-aa42-4917-906f-9a9b098ba2c0-kube-api-access-jsbmf\") pod \"octavia-operator-controller-manager-745bbbd77b-4g8xm\" (UID: \"ec674ea8-aa42-4917-906f-9a9b098ba2c0\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.835652 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.837298 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc959\" (UniqueName: \"kubernetes.io/projected/1c684efb-e592-4c17-a896-897b466cd387-kube-api-access-kc959\") pod \"mariadb-operator-controller-manager-66997756f6-dqssm\" (UID: \"1c684efb-e592-4c17-a896-897b466cd387\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.851047 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.862340 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.863412 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.865665 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.871469 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2nqvn"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.878299 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.886881 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.888728 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.895851 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.899339 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxkt\" (UniqueName: \"kubernetes.io/projected/1328d15a-4b40-4db9-b0f8-0c8490e623b9-kube-api-access-2mxkt\") pod \"placement-operator-controller-manager-57bd55f9b7-rv7cl\" (UID: \"1328d15a-4b40-4db9-b0f8-0c8490e623b9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.899378 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25mb\" (UniqueName: \"kubernetes.io/projected/12dbbffb-b10a-4b02-9698-fa66c5ff9451-kube-api-access-h25mb\") pod \"ovn-operator-controller-manager-85c99d655-5trbx\" (UID: \"12dbbffb-b10a-4b02-9698-fa66c5ff9451\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.899401 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.899455 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6m6b\" (UniqueName: \"kubernetes.io/projected/3469cc9e-8b93-4c52-957a-78b91019767d-kube-api-access-x6m6b\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.901927 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.904089 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jqvpc"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.904307 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.910278 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-z67jz"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.919720 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.929947 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.935756 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.946214 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.947373 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.950649 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-g8rfv"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.958582 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.983768 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf"]
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.984892 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.990036 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7vfc9"
Feb 16 12:47:56 crc kubenswrapper[4799]: I0216 12:47:56.990287 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.001890 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxfp\" (UniqueName: \"kubernetes.io/projected/bd478887-eb50-4e9c-8933-7b513c323cac-kube-api-access-2wxfp\") pod \"swift-operator-controller-manager-79558bbfbf-6fhfw\" (UID: \"bd478887-eb50-4e9c-8933-7b513c323cac\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.001947 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4w6t\" (UniqueName: \"kubernetes.io/projected/7333b2fd-d81d-4daa-965a-3d5fefca8863-kube-api-access-z4w6t\") pod \"telemetry-operator-controller-manager-56dc67d744-fhf99\" (UID: \"7333b2fd-d81d-4daa-965a-3d5fefca8863\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.002016 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxkt\" (UniqueName: \"kubernetes.io/projected/1328d15a-4b40-4db9-b0f8-0c8490e623b9-kube-api-access-2mxkt\") pod \"placement-operator-controller-manager-57bd55f9b7-rv7cl\" (UID: \"1328d15a-4b40-4db9-b0f8-0c8490e623b9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.002036 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25mb\" (UniqueName: \"kubernetes.io/projected/12dbbffb-b10a-4b02-9698-fa66c5ff9451-kube-api-access-h25mb\") pod \"ovn-operator-controller-manager-85c99d655-5trbx\" (UID: \"12dbbffb-b10a-4b02-9698-fa66c5ff9451\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.002073 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.002095 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgc2w\" (UniqueName: \"kubernetes.io/projected/12e59839-c074-42ea-84e6-1be9b5a261ad-kube-api-access-dgc2w\") pod \"test-operator-controller-manager-8467ccb4c8-lz8sd\" (UID: \"12e59839-c074-42ea-84e6-1be9b5a261ad\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.002143 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6m6b\" (UniqueName: \"kubernetes.io/projected/3469cc9e-8b93-4c52-957a-78b91019767d-kube-api-access-x6m6b\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.002822 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.002870 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert podName:3469cc9e-8b93-4c52-957a-78b91019767d nodeName:}" failed. No retries permitted until 2026-02-16 12:47:57.502857218 +0000 UTC m=+983.095872552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" (UID: "3469cc9e-8b93-4c52-957a-78b91019767d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.018388 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.029210 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.030762 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.036381 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.036680 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.036810 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jjjt8"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.039496 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.050585 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.051728 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25mb\" (UniqueName: \"kubernetes.io/projected/12dbbffb-b10a-4b02-9698-fa66c5ff9451-kube-api-access-h25mb\") pod \"ovn-operator-controller-manager-85c99d655-5trbx\" (UID: \"12dbbffb-b10a-4b02-9698-fa66c5ff9451\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.058001 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6m6b\" (UniqueName: \"kubernetes.io/projected/3469cc9e-8b93-4c52-957a-78b91019767d-kube-api-access-x6m6b\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.058503 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxkt\" (UniqueName: \"kubernetes.io/projected/1328d15a-4b40-4db9-b0f8-0c8490e623b9-kube-api-access-2mxkt\") pod \"placement-operator-controller-manager-57bd55f9b7-rv7cl\" (UID: \"1328d15a-4b40-4db9-b0f8-0c8490e623b9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.066494 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.076530 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.096437 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.105899 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wxfp\" (UniqueName: \"kubernetes.io/projected/bd478887-eb50-4e9c-8933-7b513c323cac-kube-api-access-2wxfp\") pod \"swift-operator-controller-manager-79558bbfbf-6fhfw\" (UID: \"bd478887-eb50-4e9c-8933-7b513c323cac\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.105967 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4w6t\" (UniqueName: \"kubernetes.io/projected/7333b2fd-d81d-4daa-965a-3d5fefca8863-kube-api-access-z4w6t\") pod \"telemetry-operator-controller-manager-56dc67d744-fhf99\" (UID: \"7333b2fd-d81d-4daa-965a-3d5fefca8863\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.106013 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjv5k\" (UniqueName: \"kubernetes.io/projected/1e501664-2258-45c7-8934-7f953c7fc799-kube-api-access-hjv5k\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.106051 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsz27\" (UniqueName: \"kubernetes.io/projected/0935892b-89a7-4b63-8012-dbe285c5a2f3-kube-api-access-hsz27\") pod \"watcher-operator-controller-manager-7f65d44ccf-htwqf\" (UID: \"0935892b-89a7-4b63-8012-dbe285c5a2f3\") " pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.106093 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.106162 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgc2w\" (UniqueName: \"kubernetes.io/projected/12e59839-c074-42ea-84e6-1be9b5a261ad-kube-api-access-dgc2w\") pod \"test-operator-controller-manager-8467ccb4c8-lz8sd\" (UID: \"12e59839-c074-42ea-84e6-1be9b5a261ad\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.106194 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.139645 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.143138 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgc2w\" (UniqueName: \"kubernetes.io/projected/12e59839-c074-42ea-84e6-1be9b5a261ad-kube-api-access-dgc2w\") pod \"test-operator-controller-manager-8467ccb4c8-lz8sd\" (UID: \"12e59839-c074-42ea-84e6-1be9b5a261ad\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.150920 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4w6t\" (UniqueName: \"kubernetes.io/projected/7333b2fd-d81d-4daa-965a-3d5fefca8863-kube-api-access-z4w6t\") pod \"telemetry-operator-controller-manager-56dc67d744-fhf99\" (UID: \"7333b2fd-d81d-4daa-965a-3d5fefca8863\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.213837 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wxfp\" (UniqueName: \"kubernetes.io/projected/bd478887-eb50-4e9c-8933-7b513c323cac-kube-api-access-2wxfp\") pod \"swift-operator-controller-manager-79558bbfbf-6fhfw\" (UID: \"bd478887-eb50-4e9c-8933-7b513c323cac\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.216704 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjv5k\" (UniqueName: \"kubernetes.io/projected/1e501664-2258-45c7-8934-7f953c7fc799-kube-api-access-hjv5k\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.218927 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsz27\" (UniqueName: \"kubernetes.io/projected/0935892b-89a7-4b63-8012-dbe285c5a2f3-kube-api-access-hsz27\") pod \"watcher-operator-controller-manager-7f65d44ccf-htwqf\" (UID: \"0935892b-89a7-4b63-8012-dbe285c5a2f3\") " pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.219647 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.222679 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.222800 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:47:57.722774517 +0000 UTC m=+983.315789851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "metrics-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.220118 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.224797 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.224918 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:47:57.72489211 +0000 UTC m=+983.317907434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.251140 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsz27\" (UniqueName: \"kubernetes.io/projected/0935892b-89a7-4b63-8012-dbe285c5a2f3-kube-api-access-hsz27\") pod \"watcher-operator-controller-manager-7f65d44ccf-htwqf\" (UID: \"0935892b-89a7-4b63-8012-dbe285c5a2f3\") " pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.273053 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.282034 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjv5k\" (UniqueName: \"kubernetes.io/projected/1e501664-2258-45c7-8934-7f953c7fc799-kube-api-access-hjv5k\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.292421 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.325735 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.327678 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t"
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.327925 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.327987 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert podName:ae60b108-5e33-408f-a861-8e2e1e9ab643 nodeName:}" failed. No retries permitted until 2026-02-16 12:47:58.327968584 +0000 UTC m=+983.920983918 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert") pod "infra-operator-controller-manager-66d6b5f488-gt66t" (UID: "ae60b108-5e33-408f-a861-8e2e1e9ab643") : secret "infra-operator-webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.331651 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.342143 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6hght"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.443652 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.469432 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.470604 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.555749 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.555884 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pszg7\" (UniqueName: \"kubernetes.io/projected/692956be-1d06-489c-9a30-0f7e4e144caa-kube-api-access-pszg7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hrpbx\" (UID: \"692956be-1d06-489c-9a30-0f7e4e144caa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx"
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.556782 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.556837 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert podName:3469cc9e-8b93-4c52-957a-78b91019767d nodeName:}" failed. No retries permitted until 2026-02-16 12:47:58.556819029 +0000 UTC m=+984.149834363 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" (UID: "3469cc9e-8b93-4c52-957a-78b91019767d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.657629 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pszg7\" (UniqueName: \"kubernetes.io/projected/692956be-1d06-489c-9a30-0f7e4e144caa-kube-api-access-pszg7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hrpbx\" (UID: \"692956be-1d06-489c-9a30-0f7e4e144caa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.689082 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pszg7\" (UniqueName: \"kubernetes.io/projected/692956be-1d06-489c-9a30-0f7e4e144caa-kube-api-access-pszg7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hrpbx\" (UID: \"692956be-1d06-489c-9a30-0f7e4e144caa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.723341 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.745362 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.762045 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.762236 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.762301 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.762413 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:47:58.762380729 +0000 UTC m=+984.355396243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "webhook-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.762442 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: E0216 12:47:57.762516 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:47:58.762493963 +0000 UTC m=+984.355509297 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "metrics-server-cert" not found
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.904902 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.936840 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96"]
Feb 16 12:47:57 crc kubenswrapper[4799]: I0216 12:47:57.978456 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx"
Feb 16 12:47:58 crc kubenswrapper[4799]: W0216 12:47:58.016769 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cc692f7_262b_4ffa_b259_69f665422e8d.slice/crio-9038345b008698bddd1795381441bee132cb3b2812eca594de6a45486d65af17 WatchSource:0}: Error finding container 9038345b008698bddd1795381441bee132cb3b2812eca594de6a45486d65af17: Status 404 returned error can't find the container with id 9038345b008698bddd1795381441bee132cb3b2812eca594de6a45486d65af17
Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.079974 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" event={"ID":"b7dcb594-1126-4b75-8f5d-d2b5edc9ccad","Type":"ContainerStarted","Data":"5eba20ba54cdcdbdddbf14f2886a275e9bedfe56aae210592d3c0d2dd95ee0f7"}
Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.091145 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" event={"ID":"5cc692f7-262b-4ffa-b259-69f665422e8d","Type":"ContainerStarted","Data":"9038345b008698bddd1795381441bee132cb3b2812eca594de6a45486d65af17"}
Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.093388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" event={"ID":"3278a4bc-c2fa-4672-9a31-f53b0e95dbcd","Type":"ContainerStarted","Data":"5da19def697ffc8cbbd0f926552710774eef4a4463410b9f39de8fb32c051637"}
Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.094637 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" event={"ID":"e555e0d9-b9d6-4e25-ad40-c6d9c1cae800","Type":"ContainerStarted","Data":"c03f168155c5d7b6d60717c5231c46d9100e78f7aa681fed5575b23a1ab486a8"}
Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.380585 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t"
Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.380845 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.380939 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert podName:ae60b108-5e33-408f-a861-8e2e1e9ab643 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:00.380914775 +0000 UTC m=+985.973930279 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert") pod "infra-operator-controller-manager-66d6b5f488-gt66t" (UID: "ae60b108-5e33-408f-a861-8e2e1e9ab643") : secret "infra-operator-webhook-server-cert" not found
Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.406800 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz"]
Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.440434 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8"]
Feb 16 12:47:58 crc kubenswrapper[4799]: W0216 12:47:58.443076 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8106c68_2300_410d_94fc_5dc71651dba5.slice/crio-9d8b47861d369f502b936c186dfcdec8ad20fba5a99f66d052a3d94162fd31df WatchSource:0}: Error finding container 9d8b47861d369f502b936c186dfcdec8ad20fba5a99f66d052a3d94162fd31df: Status 404 returned error can't find the container with id 9d8b47861d369f502b936c186dfcdec8ad20fba5a99f66d052a3d94162fd31df
Feb 16 12:47:58 crc kubenswrapper[4799]: W0216 12:47:58.446368 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cdd0bfb_b4c4_4c37_9d3b_37b4f1607379.slice/crio-c6d0b961a0a6244a65b0bcc1faeaf2445a2c1f900ed16f04ff6f1f791e6b79d5 WatchSource:0}: Error finding container c6d0b961a0a6244a65b0bcc1faeaf2445a2c1f900ed16f04ff6f1f791e6b79d5: Status 404 returned error can't find the container with id c6d0b961a0a6244a65b0bcc1faeaf2445a2c1f900ed16f04ff6f1f791e6b79d5
Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.450373 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44"]
Feb 16 12:47:58 crc kubenswrapper[4799]:
I0216 12:47:58.585731 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.586011 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.586103 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert podName:3469cc9e-8b93-4c52-957a-78b91019767d nodeName:}" failed. No retries permitted until 2026-02-16 12:48:00.586084283 +0000 UTC m=+986.179099617 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" (UID: "3469cc9e-8b93-4c52-957a-78b91019767d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.790156 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.790378 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.790429 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.790487 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:00.790472139 +0000 UTC m=+986.383487463 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "metrics-server-cert" not found Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.790488 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.790529 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:00.79051767 +0000 UTC m=+986.383533004 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "webhook-server-cert" not found Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.815676 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx"] Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.842211 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd"] Feb 16 12:47:58 crc kubenswrapper[4799]: W0216 12:47:58.873827 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec15942_7ca3_444c_a096_a23c21b701ed.slice/crio-0ff0d7fdfb1f47fdefcd45bce7c1f74c03edb81f14b2f185616380288b6da91f WatchSource:0}: Error finding container 0ff0d7fdfb1f47fdefcd45bce7c1f74c03edb81f14b2f185616380288b6da91f: Status 404 returned error can't find the 
container with id 0ff0d7fdfb1f47fdefcd45bce7c1f74c03edb81f14b2f185616380288b6da91f Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.893936 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm"] Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.907522 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm"] Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.917497 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm"] Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.925970 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl"] Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.943856 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr"] Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.969312 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg"] Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.971342 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:afb68925f208ca401020ca8b7812de075a77dafe3dc30fae5c095dcbe5acbc8a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgzg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-9595d6797-cq9hr_openstack-operators(b286a989-7544-4596-bb1b-f06469aedbdc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.971873 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h25mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-85c99d655-5trbx_openstack-operators(12dbbffb-b10a-4b02-9698-fa66c5ff9451): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:47:58 crc kubenswrapper[4799]: E0216 12:47:58.973610 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx" podUID="12dbbffb-b10a-4b02-9698-fa66c5ff9451" Feb 16 12:47:58 crc 
kubenswrapper[4799]: E0216 12:47:58.973680 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" podUID="b286a989-7544-4596-bb1b-f06469aedbdc" Feb 16 12:47:58 crc kubenswrapper[4799]: I0216 12:47:58.981476 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw"] Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.001729 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx"] Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.058323 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf"] Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.071049 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx"] Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.077639 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99"] Feb 16 12:47:59 crc kubenswrapper[4799]: W0216 12:47:59.083854 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod692956be_1d06_489c_9a30_0f7e4e144caa.slice/crio-abeeb798e3b97a83e07ab4f48064fa912aabf102053052efd1171eeb2b2297bb WatchSource:0}: Error finding container abeeb798e3b97a83e07ab4f48064fa912aabf102053052efd1171eeb2b2297bb: Status 404 returned error can't find the container with id abeeb798e3b97a83e07ab4f48064fa912aabf102053052efd1171eeb2b2297bb Feb 16 12:47:59 crc kubenswrapper[4799]: E0216 12:47:59.090921 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pszg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-hrpbx_openstack-operators(692956be-1d06-489c-9a30-0f7e4e144caa): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:47:59 crc kubenswrapper[4799]: E0216 12:47:59.092240 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx" podUID="692956be-1d06-489c-9a30-0f7e4e144caa" Feb 16 12:47:59 crc kubenswrapper[4799]: E0216 12:47:59.100025 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z4w6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-fhf99_openstack-operators(7333b2fd-d81d-4daa-965a-3d5fefca8863): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:47:59 crc kubenswrapper[4799]: E0216 12:47:59.102456 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99" podUID="7333b2fd-d81d-4daa-965a-3d5fefca8863" Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.107107 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm" event={"ID":"ec674ea8-aa42-4917-906f-9a9b098ba2c0","Type":"ContainerStarted","Data":"d05c5abd23edee7391d80f1cc6904d7aba2e639f69d10e0c34650adb395c2777"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.109660 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99" 
event={"ID":"7333b2fd-d81d-4daa-965a-3d5fefca8863","Type":"ContainerStarted","Data":"dc68bcfc24bdf1b0efc8dc5d79d3b6428a9331b585ed2438dfbd24218292cbb5"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.111666 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf" event={"ID":"0935892b-89a7-4b63-8012-dbe285c5a2f3","Type":"ContainerStarted","Data":"29fbb9dc211c020ef92d4ca62ce2add0555e74d2dd5f6fe177bb56b48b0cec13"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.113528 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw" event={"ID":"bd478887-eb50-4e9c-8933-7b513c323cac","Type":"ContainerStarted","Data":"ae755d5c8a60e701cbf19e0890a802c147f13d095cbb25fbd1536aeab6244548"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.116849 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" event={"ID":"f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf","Type":"ContainerStarted","Data":"16c7c33f5b000c254bcba4360d0451ba5fc62f6220e402e27045a5892bad3910"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.118738 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" event={"ID":"9ec15942-7ca3-444c-a096-a23c21b701ed","Type":"ContainerStarted","Data":"0ff0d7fdfb1f47fdefcd45bce7c1f74c03edb81f14b2f185616380288b6da91f"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.120458 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm" event={"ID":"1c684efb-e592-4c17-a896-897b466cd387","Type":"ContainerStarted","Data":"23b69911f0fce8ac16e5699076bb79a07d3322d36fdc28e80f3d311a0c83d55c"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.123550 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx" event={"ID":"12dbbffb-b10a-4b02-9698-fa66c5ff9451","Type":"ContainerStarted","Data":"cf1a7ef5a35697f8def591e67a5487cc912262fb50f422e268025ba1a1bee4fb"} Feb 16 12:47:59 crc kubenswrapper[4799]: E0216 12:47:59.128723 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx" podUID="12dbbffb-b10a-4b02-9698-fa66c5ff9451" Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.129868 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx" event={"ID":"692956be-1d06-489c-9a30-0f7e4e144caa","Type":"ContainerStarted","Data":"abeeb798e3b97a83e07ab4f48064fa912aabf102053052efd1171eeb2b2297bb"} Feb 16 12:47:59 crc kubenswrapper[4799]: E0216 12:47:59.132100 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99" podUID="7333b2fd-d81d-4daa-965a-3d5fefca8863" Feb 16 12:47:59 crc kubenswrapper[4799]: E0216 12:47:59.132240 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx" 
podUID="692956be-1d06-489c-9a30-0f7e4e144caa" Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.132697 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" event={"ID":"fb144fe6-dbb4-492a-acb1-b642ea0a20f0","Type":"ContainerStarted","Data":"a5a007dab0e96ca586cc9c7f847f66e743aeee3a993a693e69224ad2122fcba7"} Feb 16 12:47:59 crc kubenswrapper[4799]: E0216 12:47:59.161846 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:afb68925f208ca401020ca8b7812de075a77dafe3dc30fae5c095dcbe5acbc8a\\\"\"" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" podUID="b286a989-7544-4596-bb1b-f06469aedbdc" Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.169279 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" event={"ID":"b286a989-7544-4596-bb1b-f06469aedbdc","Type":"ContainerStarted","Data":"e0391802e670fae52572e5608c52610cd026efb59ec7570dfbbd2d237d9b46fb"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.169362 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" event={"ID":"17536931-400e-4131-8992-a30c2ebda385","Type":"ContainerStarted","Data":"9820e699df9f76770adb7243da8f5c38cb6803b51c66444ace60685c1ec7d172"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.170667 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl" event={"ID":"1328d15a-4b40-4db9-b0f8-0c8490e623b9","Type":"ContainerStarted","Data":"f5bf72b56736ad9a17365bb4ab69ec0baa69f9936cbe27010645bc3ec46fbf46"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.181519 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" event={"ID":"8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379","Type":"ContainerStarted","Data":"c6d0b961a0a6244a65b0bcc1faeaf2445a2c1f900ed16f04ff6f1f791e6b79d5"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.187245 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" event={"ID":"c8106c68-2300-410d-94fc-5dc71651dba5","Type":"ContainerStarted","Data":"9d8b47861d369f502b936c186dfcdec8ad20fba5a99f66d052a3d94162fd31df"} Feb 16 12:47:59 crc kubenswrapper[4799]: I0216 12:47:59.197713 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd" event={"ID":"12e59839-c074-42ea-84e6-1be9b5a261ad","Type":"ContainerStarted","Data":"c65d84f25649976f0f79c0992c1147486a9181b02281709166eb55e6f327652d"} Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.209994 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99" podUID="7333b2fd-d81d-4daa-965a-3d5fefca8863" Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.212061 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:afb68925f208ca401020ca8b7812de075a77dafe3dc30fae5c095dcbe5acbc8a\\\"\"" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" podUID="b286a989-7544-4596-bb1b-f06469aedbdc" Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.212133 4799 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx" podUID="692956be-1d06-489c-9a30-0f7e4e144caa" Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.223793 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx" podUID="12dbbffb-b10a-4b02-9698-fa66c5ff9451" Feb 16 12:48:00 crc kubenswrapper[4799]: I0216 12:48:00.441308 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.441529 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.441582 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert podName:ae60b108-5e33-408f-a861-8e2e1e9ab643 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:04.441568529 +0000 UTC m=+990.034583863 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert") pod "infra-operator-controller-manager-66d6b5f488-gt66t" (UID: "ae60b108-5e33-408f-a861-8e2e1e9ab643") : secret "infra-operator-webhook-server-cert" not found Feb 16 12:48:00 crc kubenswrapper[4799]: I0216 12:48:00.645551 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.645840 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.645907 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert podName:3469cc9e-8b93-4c52-957a-78b91019767d nodeName:}" failed. No retries permitted until 2026-02-16 12:48:04.645887352 +0000 UTC m=+990.238902686 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" (UID: "3469cc9e-8b93-4c52-957a-78b91019767d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 12:48:00 crc kubenswrapper[4799]: I0216 12:48:00.848747 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:00 crc kubenswrapper[4799]: I0216 12:48:00.848908 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.848977 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.849083 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.849109 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:04.849076521 +0000 UTC m=+990.442091855 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "metrics-server-cert" not found Feb 16 12:48:00 crc kubenswrapper[4799]: E0216 12:48:00.849166 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:04.849146113 +0000 UTC m=+990.442161447 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "webhook-server-cert" not found Feb 16 12:48:04 crc kubenswrapper[4799]: I0216 12:48:04.524796 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:48:04 crc kubenswrapper[4799]: E0216 12:48:04.525019 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 12:48:04 crc kubenswrapper[4799]: E0216 12:48:04.525456 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert podName:ae60b108-5e33-408f-a861-8e2e1e9ab643 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:12.525432919 +0000 UTC m=+998.118448263 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert") pod "infra-operator-controller-manager-66d6b5f488-gt66t" (UID: "ae60b108-5e33-408f-a861-8e2e1e9ab643") : secret "infra-operator-webhook-server-cert" not found Feb 16 12:48:04 crc kubenswrapper[4799]: I0216 12:48:04.727823 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:48:04 crc kubenswrapper[4799]: E0216 12:48:04.728066 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 12:48:04 crc kubenswrapper[4799]: E0216 12:48:04.728152 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert podName:3469cc9e-8b93-4c52-957a-78b91019767d nodeName:}" failed. No retries permitted until 2026-02-16 12:48:12.728115133 +0000 UTC m=+998.321130467 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" (UID: "3469cc9e-8b93-4c52-957a-78b91019767d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 12:48:04 crc kubenswrapper[4799]: I0216 12:48:04.931888 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:04 crc kubenswrapper[4799]: I0216 12:48:04.932281 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:04 crc kubenswrapper[4799]: E0216 12:48:04.932330 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 12:48:04 crc kubenswrapper[4799]: E0216 12:48:04.932585 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:12.9325502 +0000 UTC m=+998.525565554 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "metrics-server-cert" not found Feb 16 12:48:04 crc kubenswrapper[4799]: E0216 12:48:04.932432 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 12:48:04 crc kubenswrapper[4799]: E0216 12:48:04.933088 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:12.933075386 +0000 UTC m=+998.526090730 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "webhook-server-cert" not found Feb 16 12:48:11 crc kubenswrapper[4799]: E0216 12:48:11.299613 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1323d6f8e365f562bb4c1d5dcacd8aa6e2679ff9d963a73bcfd9556baf97a1dd" Feb 16 12:48:11 crc kubenswrapper[4799]: E0216 12:48:11.300588 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1323d6f8e365f562bb4c1d5dcacd8aa6e2679ff9d963a73bcfd9556baf97a1dd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-57npd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-68c6d499cb-z9x44_openstack-operators(c8106c68-2300-410d-94fc-5dc71651dba5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:48:11 crc kubenswrapper[4799]: E0216 12:48:11.301779 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" podUID="c8106c68-2300-410d-94fc-5dc71651dba5" Feb 16 12:48:11 crc kubenswrapper[4799]: E0216 12:48:11.367733 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1323d6f8e365f562bb4c1d5dcacd8aa6e2679ff9d963a73bcfd9556baf97a1dd\\\"\"" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" podUID="c8106c68-2300-410d-94fc-5dc71651dba5" Feb 16 12:48:11 crc kubenswrapper[4799]: E0216 12:48:11.905514 4799 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee" Feb 16 12:48:11 crc kubenswrapper[4799]: E0216 12:48:11.905802 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrghd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-96fff9cb8-jb5fm_openstack-operators(fb144fe6-dbb4-492a-acb1-b642ea0a20f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:48:11 crc kubenswrapper[4799]: E0216 12:48:11.907025 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" podUID="fb144fe6-dbb4-492a-acb1-b642ea0a20f0" Feb 16 12:48:12 crc kubenswrapper[4799]: E0216 12:48:12.373649 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee\\\"\"" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" podUID="fb144fe6-dbb4-492a-acb1-b642ea0a20f0" Feb 16 12:48:12 crc kubenswrapper[4799]: E0216 12:48:12.507636 4799 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:5007f87a2869468db06d6257c17e389b587a095a087466c69c0c92328e699546" Feb 16 12:48:12 crc kubenswrapper[4799]: E0216 12:48:12.507876 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:5007f87a2869468db06d6257c17e389b587a095a087466c69c0c92328e699546,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rch2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-55cc45767f-ddwg6_openstack-operators(5cc692f7-262b-4ffa-b259-69f665422e8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:48:12 crc kubenswrapper[4799]: E0216 12:48:12.509273 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" podUID="5cc692f7-262b-4ffa-b259-69f665422e8d" Feb 16 12:48:12 crc kubenswrapper[4799]: I0216 12:48:12.595253 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:48:12 crc kubenswrapper[4799]: E0216 12:48:12.595551 4799 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 16 12:48:12 crc kubenswrapper[4799]: E0216 12:48:12.595635 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert podName:ae60b108-5e33-408f-a861-8e2e1e9ab643 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:28.59560807 +0000 UTC m=+1014.188623414 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert") pod "infra-operator-controller-manager-66d6b5f488-gt66t" (UID: "ae60b108-5e33-408f-a861-8e2e1e9ab643") : secret "infra-operator-webhook-server-cert" not found Feb 16 12:48:12 crc kubenswrapper[4799]: I0216 12:48:12.797793 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:48:12 crc kubenswrapper[4799]: E0216 12:48:12.798063 4799 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 12:48:12 crc kubenswrapper[4799]: E0216 12:48:12.798146 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert podName:3469cc9e-8b93-4c52-957a-78b91019767d nodeName:}" failed. No retries permitted until 2026-02-16 12:48:28.798107658 +0000 UTC m=+1014.391122992 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" (UID: "3469cc9e-8b93-4c52-957a-78b91019767d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 12:48:13 crc kubenswrapper[4799]: I0216 12:48:13.001218 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:13 crc kubenswrapper[4799]: I0216 12:48:13.001330 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.001526 4799 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.001559 4799 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.001614 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:29.001583876 +0000 UTC m=+1014.594599210 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "webhook-server-cert" not found Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.001671 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs podName:1e501664-2258-45c7-8934-7f953c7fc799 nodeName:}" failed. No retries permitted until 2026-02-16 12:48:29.001645288 +0000 UTC m=+1014.594660622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs") pod "openstack-operator-controller-manager-667bdd5bc9-lpnbm" (UID: "1e501664-2258-45c7-8934-7f953c7fc799") : secret "metrics-server-cert" not found Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.302513 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27" Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.302745 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgc2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-lz8sd_openstack-operators(12e59839-c074-42ea-84e6-1be9b5a261ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.304080 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd" podUID="12e59839-c074-42ea-84e6-1be9b5a261ad" Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.383245 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:5007f87a2869468db06d6257c17e389b587a095a087466c69c0c92328e699546\\\"\"" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" podUID="5cc692f7-262b-4ffa-b259-69f665422e8d" Feb 16 12:48:13 crc kubenswrapper[4799]: E0216 12:48:13.385239 4799 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd" podUID="12e59839-c074-42ea-84e6-1be9b5a261ad" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.002578 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.003225 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpb5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-8r6qg_openstack-operators(17536931-400e-4131-8992-a30c2ebda385): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.004416 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" podUID="17536931-400e-4131-8992-a30c2ebda385" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.393446 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" podUID="17536931-400e-4131-8992-a30c2ebda385" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.589597 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.589855 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slwtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-686fx_openstack-operators(9ec15942-7ca3-444c-a096-a23c21b701ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.591036 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" podUID="9ec15942-7ca3-444c-a096-a23c21b701ed" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.652976 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.119:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.653060 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.653273 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.119:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hsz27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7f65d44ccf-htwqf_openstack-operators(0935892b-89a7-4b63-8012-dbe285c5a2f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:48:14 crc kubenswrapper[4799]: E0216 12:48:14.656310 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf" podUID="0935892b-89a7-4b63-8012-dbe285c5a2f3" Feb 16 12:48:15 crc kubenswrapper[4799]: E0216 12:48:15.412165 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf" podUID="0935892b-89a7-4b63-8012-dbe285c5a2f3" Feb 16 12:48:15 crc kubenswrapper[4799]: E0216 12:48:15.413214 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" podUID="9ec15942-7ca3-444c-a096-a23c21b701ed" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.456997 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx" event={"ID":"692956be-1d06-489c-9a30-0f7e4e144caa","Type":"ContainerStarted","Data":"e4d34dc364dff297e26c229c7bca74f99888362a53faa86281c34256d17b68f2"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.466386 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" event={"ID":"e555e0d9-b9d6-4e25-ad40-c6d9c1cae800","Type":"ContainerStarted","Data":"c64b155c33b3add302106b2709319dc2654c263e7d98b83e4be1841afeef31e5"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.466538 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.470317 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl" event={"ID":"1328d15a-4b40-4db9-b0f8-0c8490e623b9","Type":"ContainerStarted","Data":"caa547029dbe11f6e0d13e9d42b9b3523c58a4902aed0489e63b33165afa111f"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.471357 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.476226 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" event={"ID":"8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379","Type":"ContainerStarted","Data":"9c544286f47c16b67275dd20a2b319cfafcc4a895bef66a2f835cf99b81aa900"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.476359 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.478499 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" event={"ID":"b7dcb594-1126-4b75-8f5d-d2b5edc9ccad","Type":"ContainerStarted","Data":"c51cd3c7cf7593dfac8f26bc6bb691e304aeb7820b10e846bd8fc60e3031f066"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.478640 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.481264 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm" event={"ID":"1c684efb-e592-4c17-a896-897b466cd387","Type":"ContainerStarted","Data":"329fe3a46ecdbb84e78c8c08420d9d16fab1697e00c675f1e7aeb42b512473af"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.481750 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.483072 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx" event={"ID":"12dbbffb-b10a-4b02-9698-fa66c5ff9451","Type":"ContainerStarted","Data":"716cb11486c16a6930be9de452c2b7578c6553fd7f6a3514355a5226a237ec12"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.485916 4799 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.487203 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" event={"ID":"f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf","Type":"ContainerStarted","Data":"0b36793761dc7d49bdd73bcf9934c514644affa642c8c3d15ff381f6e39800b0"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.487419 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.489486 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" event={"ID":"3278a4bc-c2fa-4672-9a31-f53b0e95dbcd","Type":"ContainerStarted","Data":"b05927b5863c6ad7ee59d1126836f3031d2af0e8ae47b25d03fe4d565c114917"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.490705 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.491796 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" event={"ID":"b286a989-7544-4596-bb1b-f06469aedbdc","Type":"ContainerStarted","Data":"f06f23bc42c79d119a6f28c1eef4ff147678a93b3f62040598611344e0808b0d"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.492377 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.494152 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm" 
event={"ID":"ec674ea8-aa42-4917-906f-9a9b098ba2c0","Type":"ContainerStarted","Data":"338aad9c53f65773567f7d82c400a0d9a89806156586a80b3ecc47fc76ee2d0a"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.495041 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.497256 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99" event={"ID":"7333b2fd-d81d-4daa-965a-3d5fefca8863","Type":"ContainerStarted","Data":"4311606d59235b0ff1343692e974b060963d74e8ec16fead4315c0d841ac469e"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.498250 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.500643 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw" event={"ID":"bd478887-eb50-4e9c-8933-7b513c323cac","Type":"ContainerStarted","Data":"abd1501b2c799be446665ebc5916e96eb2b3538ff85f2807da844c56e936175c"} Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.501764 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.564013 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hrpbx" podStartSLOduration=3.109204557 podStartE2EDuration="22.563985062s" podCreationTimestamp="2026-02-16 12:47:57 +0000 UTC" firstStartedPulling="2026-02-16 12:47:59.090758324 +0000 UTC m=+984.683773648" lastFinishedPulling="2026-02-16 12:48:18.545538819 +0000 UTC m=+1004.138554153" 
observedRunningTime="2026-02-16 12:48:19.503604361 +0000 UTC m=+1005.096619695" watchObservedRunningTime="2026-02-16 12:48:19.563985062 +0000 UTC m=+1005.157000396" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.577618 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm" podStartSLOduration=4.663401164 podStartE2EDuration="23.577593338s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.88207047 +0000 UTC m=+984.475085804" lastFinishedPulling="2026-02-16 12:48:17.796262644 +0000 UTC m=+1003.389277978" observedRunningTime="2026-02-16 12:48:19.557469918 +0000 UTC m=+1005.150485252" watchObservedRunningTime="2026-02-16 12:48:19.577593338 +0000 UTC m=+1005.170608672" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.604462 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" podStartSLOduration=3.654689463 podStartE2EDuration="23.604441539s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:57.845510418 +0000 UTC m=+983.438525752" lastFinishedPulling="2026-02-16 12:48:17.795262494 +0000 UTC m=+1003.388277828" observedRunningTime="2026-02-16 12:48:19.601973135 +0000 UTC m=+1005.194988469" watchObservedRunningTime="2026-02-16 12:48:19.604441539 +0000 UTC m=+1005.197456873" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.714458 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" podStartSLOduration=5.800382272 podStartE2EDuration="23.714434309s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.033359071 +0000 UTC m=+983.626374405" lastFinishedPulling="2026-02-16 12:48:15.947411108 +0000 UTC m=+1001.540426442" 
observedRunningTime="2026-02-16 12:48:19.709067569 +0000 UTC m=+1005.302082903" watchObservedRunningTime="2026-02-16 12:48:19.714434309 +0000 UTC m=+1005.307449633" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.841200 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm" podStartSLOduration=5.0137186 podStartE2EDuration="23.841169477s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.966467017 +0000 UTC m=+984.559482351" lastFinishedPulling="2026-02-16 12:48:17.793917894 +0000 UTC m=+1003.386933228" observedRunningTime="2026-02-16 12:48:19.791771784 +0000 UTC m=+1005.384787118" watchObservedRunningTime="2026-02-16 12:48:19.841169477 +0000 UTC m=+1005.434184811" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.842954 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" podStartSLOduration=7.726618274 podStartE2EDuration="23.84294254s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.449710566 +0000 UTC m=+984.042725900" lastFinishedPulling="2026-02-16 12:48:14.566034832 +0000 UTC m=+1000.159050166" observedRunningTime="2026-02-16 12:48:19.841386344 +0000 UTC m=+1005.434401678" watchObservedRunningTime="2026-02-16 12:48:19.84294254 +0000 UTC m=+1005.435957874" Feb 16 12:48:19 crc kubenswrapper[4799]: I0216 12:48:19.910949 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw" podStartSLOduration=6.888683766 podStartE2EDuration="23.910924408s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.925240738 +0000 UTC m=+984.518256072" lastFinishedPulling="2026-02-16 12:48:15.94748137 +0000 UTC m=+1001.540496714" 
observedRunningTime="2026-02-16 12:48:19.889448167 +0000 UTC m=+1005.482463501" watchObservedRunningTime="2026-02-16 12:48:19.910924408 +0000 UTC m=+1005.503939742" Feb 16 12:48:20 crc kubenswrapper[4799]: I0216 12:48:20.039867 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99" podStartSLOduration=4.462223023 podStartE2EDuration="24.039842662s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:59.099788603 +0000 UTC m=+984.692803937" lastFinishedPulling="2026-02-16 12:48:18.677408242 +0000 UTC m=+1004.270423576" observedRunningTime="2026-02-16 12:48:19.97303285 +0000 UTC m=+1005.566048184" watchObservedRunningTime="2026-02-16 12:48:20.039842662 +0000 UTC m=+1005.632857996" Feb 16 12:48:20 crc kubenswrapper[4799]: I0216 12:48:20.042681 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" podStartSLOduration=4.662309911 podStartE2EDuration="24.042637086s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.413499406 +0000 UTC m=+984.006514740" lastFinishedPulling="2026-02-16 12:48:17.793826581 +0000 UTC m=+1003.386841915" observedRunningTime="2026-02-16 12:48:20.039409269 +0000 UTC m=+1005.632424603" watchObservedRunningTime="2026-02-16 12:48:20.042637086 +0000 UTC m=+1005.635652420" Feb 16 12:48:20 crc kubenswrapper[4799]: I0216 12:48:20.102906 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" podStartSLOduration=4.5255942220000005 podStartE2EDuration="24.102881892s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.97093645 +0000 UTC m=+984.563951784" lastFinishedPulling="2026-02-16 12:48:18.54822412 +0000 UTC m=+1004.141239454" 
observedRunningTime="2026-02-16 12:48:20.100044578 +0000 UTC m=+1005.693059912" watchObservedRunningTime="2026-02-16 12:48:20.102881892 +0000 UTC m=+1005.695897226" Feb 16 12:48:20 crc kubenswrapper[4799]: I0216 12:48:20.164165 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" podStartSLOduration=8.02820544 podStartE2EDuration="24.164135799s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:57.835737897 +0000 UTC m=+983.428753231" lastFinishedPulling="2026-02-16 12:48:13.971668246 +0000 UTC m=+999.564683590" observedRunningTime="2026-02-16 12:48:20.162895122 +0000 UTC m=+1005.755910456" watchObservedRunningTime="2026-02-16 12:48:20.164135799 +0000 UTC m=+1005.757151133" Feb 16 12:48:20 crc kubenswrapper[4799]: I0216 12:48:20.205201 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl" podStartSLOduration=7.182423326 podStartE2EDuration="24.205177993s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.9246395 +0000 UTC m=+984.517654834" lastFinishedPulling="2026-02-16 12:48:15.947394167 +0000 UTC m=+1001.540409501" observedRunningTime="2026-02-16 12:48:20.202593406 +0000 UTC m=+1005.795608740" watchObservedRunningTime="2026-02-16 12:48:20.205177993 +0000 UTC m=+1005.798193327" Feb 16 12:48:20 crc kubenswrapper[4799]: I0216 12:48:20.255713 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx" podStartSLOduration=4.679712589 podStartE2EDuration="24.255685369s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.971639461 +0000 UTC m=+984.564654795" lastFinishedPulling="2026-02-16 12:48:18.547612241 +0000 UTC m=+1004.140627575" 
observedRunningTime="2026-02-16 12:48:20.252946888 +0000 UTC m=+1005.845962222" watchObservedRunningTime="2026-02-16 12:48:20.255685369 +0000 UTC m=+1005.848700703" Feb 16 12:48:21 crc kubenswrapper[4799]: I0216 12:48:21.793256 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:48:21 crc kubenswrapper[4799]: I0216 12:48:21.793341 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:48:22 crc kubenswrapper[4799]: I0216 12:48:22.154728 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 12:48:24 crc kubenswrapper[4799]: I0216 12:48:24.538939 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" event={"ID":"c8106c68-2300-410d-94fc-5dc71651dba5","Type":"ContainerStarted","Data":"f5b230e9d3c4610b7c49f20c6b5a72222d1df792ce7faccfd9a5143be0931075"} Feb 16 12:48:24 crc kubenswrapper[4799]: I0216 12:48:24.540180 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" Feb 16 12:48:24 crc kubenswrapper[4799]: I0216 12:48:24.561618 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" podStartSLOduration=3.342145371 podStartE2EDuration="28.561590822s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" 
firstStartedPulling="2026-02-16 12:47:58.446696406 +0000 UTC m=+984.039711730" lastFinishedPulling="2026-02-16 12:48:23.666141847 +0000 UTC m=+1009.259157181" observedRunningTime="2026-02-16 12:48:24.554958234 +0000 UTC m=+1010.147973578" watchObservedRunningTime="2026-02-16 12:48:24.561590822 +0000 UTC m=+1010.154606156" Feb 16 12:48:25 crc kubenswrapper[4799]: I0216 12:48:25.548101 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" event={"ID":"fb144fe6-dbb4-492a-acb1-b642ea0a20f0","Type":"ContainerStarted","Data":"1a71adc3143052335df256efcd9c652f9607bb63a100024da09f348cea9d095b"} Feb 16 12:48:25 crc kubenswrapper[4799]: I0216 12:48:25.549549 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" Feb 16 12:48:25 crc kubenswrapper[4799]: I0216 12:48:25.568536 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" podStartSLOduration=3.925437187 podStartE2EDuration="29.568509851s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.969197649 +0000 UTC m=+984.562213083" lastFinishedPulling="2026-02-16 12:48:24.612270413 +0000 UTC m=+1010.205285747" observedRunningTime="2026-02-16 12:48:25.563424749 +0000 UTC m=+1011.156440083" watchObservedRunningTime="2026-02-16 12:48:25.568509851 +0000 UTC m=+1011.161525195" Feb 16 12:48:26 crc kubenswrapper[4799]: I0216 12:48:26.596933 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-lzptd" Feb 16 12:48:26 crc kubenswrapper[4799]: I0216 12:48:26.602189 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-zh76r" Feb 16 12:48:26 crc 
kubenswrapper[4799]: I0216 12:48:26.764023 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-m6t96" Feb 16 12:48:26 crc kubenswrapper[4799]: I0216 12:48:26.766361 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-cq9hr" Feb 16 12:48:26 crc kubenswrapper[4799]: I0216 12:48:26.869870 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-dqssm" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.027449 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-g4fg8" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.054791 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-lwlqz" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.083069 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4g8xm" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.108274 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-5trbx" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.143467 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-rv7cl" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.446788 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-fhf99" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.473645 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-6fhfw" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.577412 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" event={"ID":"9ec15942-7ca3-444c-a096-a23c21b701ed","Type":"ContainerStarted","Data":"d29d1ecae5108555b80e51d9cf4b277970238eacfa0edfa804ec61892b2f6045"} Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.577666 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.580135 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd" event={"ID":"12e59839-c074-42ea-84e6-1be9b5a261ad","Type":"ContainerStarted","Data":"e164c4749f6e4397b5f4feaf83dd5f867cdc1ab9688cb6eccbf05bbb1dae931d"} Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.580480 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.596434 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" podStartSLOduration=3.7622766199999997 podStartE2EDuration="31.596405717s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.882663608 +0000 UTC m=+984.475678942" lastFinishedPulling="2026-02-16 12:48:26.716792705 +0000 UTC m=+1012.309808039" observedRunningTime="2026-02-16 12:48:27.593751218 +0000 UTC m=+1013.186766552" watchObservedRunningTime="2026-02-16 12:48:27.596405717 +0000 UTC m=+1013.189421051" Feb 16 12:48:27 crc kubenswrapper[4799]: I0216 12:48:27.613601 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd" podStartSLOduration=3.820779765 podStartE2EDuration="31.613569519s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.925209597 +0000 UTC m=+984.518224931" lastFinishedPulling="2026-02-16 12:48:26.717999351 +0000 UTC m=+1012.311014685" observedRunningTime="2026-02-16 12:48:27.613031853 +0000 UTC m=+1013.206047187" watchObservedRunningTime="2026-02-16 12:48:27.613569519 +0000 UTC m=+1013.206584883" Feb 16 12:48:28 crc kubenswrapper[4799]: I0216 12:48:28.641810 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:48:28 crc kubenswrapper[4799]: I0216 12:48:28.654893 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae60b108-5e33-408f-a861-8e2e1e9ab643-cert\") pod \"infra-operator-controller-manager-66d6b5f488-gt66t\" (UID: \"ae60b108-5e33-408f-a861-8e2e1e9ab643\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:48:28 crc kubenswrapper[4799]: I0216 12:48:28.845737 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:48:28 crc kubenswrapper[4799]: I0216 12:48:28.850562 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/3469cc9e-8b93-4c52-957a-78b91019767d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5\" (UID: \"3469cc9e-8b93-4c52-957a-78b91019767d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:48:28 crc kubenswrapper[4799]: I0216 12:48:28.870178 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.031935 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.049871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.049945 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.057392 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-metrics-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: 
\"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.058001 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1e501664-2258-45c7-8934-7f953c7fc799-webhook-certs\") pod \"openstack-operator-controller-manager-667bdd5bc9-lpnbm\" (UID: \"1e501664-2258-45c7-8934-7f953c7fc799\") " pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.138671 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.344626 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t"] Feb 16 12:48:29 crc kubenswrapper[4799]: W0216 12:48:29.356871 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae60b108_5e33_408f_a861_8e2e1e9ab643.slice/crio-46b478c62a2b3e66b87e479181abe9a2d2384821567c79dbd1ee14fbe9e08ecb WatchSource:0}: Error finding container 46b478c62a2b3e66b87e479181abe9a2d2384821567c79dbd1ee14fbe9e08ecb: Status 404 returned error can't find the container with id 46b478c62a2b3e66b87e479181abe9a2d2384821567c79dbd1ee14fbe9e08ecb Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.517074 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5"] Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.597873 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" 
event={"ID":"ae60b108-5e33-408f-a861-8e2e1e9ab643","Type":"ContainerStarted","Data":"46b478c62a2b3e66b87e479181abe9a2d2384821567c79dbd1ee14fbe9e08ecb"} Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.599512 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" event={"ID":"3469cc9e-8b93-4c52-957a-78b91019767d","Type":"ContainerStarted","Data":"85f4b283dc506e22a8549bbee6504b563211ce6107b9617cb38c7840c75ff288"} Feb 16 12:48:29 crc kubenswrapper[4799]: I0216 12:48:29.642477 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm"] Feb 16 12:48:29 crc kubenswrapper[4799]: W0216 12:48:29.652256 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e501664_2258_45c7_8934_7f953c7fc799.slice/crio-0d972068f096e31c55c7103dc2d436dc8ce695929af447d8b8f8d07485729cb2 WatchSource:0}: Error finding container 0d972068f096e31c55c7103dc2d436dc8ce695929af447d8b8f8d07485729cb2: Status 404 returned error can't find the container with id 0d972068f096e31c55c7103dc2d436dc8ce695929af447d8b8f8d07485729cb2 Feb 16 12:48:30 crc kubenswrapper[4799]: I0216 12:48:30.612202 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" event={"ID":"1e501664-2258-45c7-8934-7f953c7fc799","Type":"ContainerStarted","Data":"0d972068f096e31c55c7103dc2d436dc8ce695929af447d8b8f8d07485729cb2"} Feb 16 12:48:35 crc kubenswrapper[4799]: I0216 12:48:35.660972 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" event={"ID":"1e501664-2258-45c7-8934-7f953c7fc799","Type":"ContainerStarted","Data":"8ef8a1441ca7869e1e3306ab569bc7feb858c9ab1ade47ba1e577a0ec7ad1bf6"} Feb 16 12:48:35 crc kubenswrapper[4799]: I0216 
12:48:35.662031 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:35 crc kubenswrapper[4799]: I0216 12:48:35.699877 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" podStartSLOduration=39.6998493 podStartE2EDuration="39.6998493s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:48:35.693958794 +0000 UTC m=+1021.286974128" watchObservedRunningTime="2026-02-16 12:48:35.6998493 +0000 UTC m=+1021.292864634" Feb 16 12:48:36 crc kubenswrapper[4799]: I0216 12:48:36.732493 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-z9x44" Feb 16 12:48:36 crc kubenswrapper[4799]: I0216 12:48:36.840886 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-686fx" Feb 16 12:48:36 crc kubenswrapper[4799]: I0216 12:48:36.854830 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-jb5fm" Feb 16 12:48:37 crc kubenswrapper[4799]: I0216 12:48:37.278540 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lz8sd" Feb 16 12:48:38 crc kubenswrapper[4799]: I0216 12:48:38.696755 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" event={"ID":"5cc692f7-262b-4ffa-b259-69f665422e8d","Type":"ContainerStarted","Data":"bec4c54e9fbd66f59d9b21c37752c2fd328cc2b5f36621f4ce71ef4aed891a75"} Feb 16 12:48:38 crc 
kubenswrapper[4799]: I0216 12:48:38.697851 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" Feb 16 12:48:38 crc kubenswrapper[4799]: I0216 12:48:38.699027 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf" event={"ID":"0935892b-89a7-4b63-8012-dbe285c5a2f3","Type":"ContainerStarted","Data":"c9bf9ec826fa3619ca221da9221f43066503607607f23923ab8bb86da132bbd2"} Feb 16 12:48:38 crc kubenswrapper[4799]: I0216 12:48:38.706715 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" event={"ID":"17536931-400e-4131-8992-a30c2ebda385","Type":"ContainerStarted","Data":"676ef3c909a4d99f4a2a418d3b078ee41cf058c0a54d6cd5f4bd9dff391ac134"} Feb 16 12:48:38 crc kubenswrapper[4799]: I0216 12:48:38.707700 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" Feb 16 12:48:38 crc kubenswrapper[4799]: I0216 12:48:38.744876 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" podStartSLOduration=2.830344439 podStartE2EDuration="42.744853399s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.020733774 +0000 UTC m=+983.613749108" lastFinishedPulling="2026-02-16 12:48:37.935242734 +0000 UTC m=+1023.528258068" observedRunningTime="2026-02-16 12:48:38.723543473 +0000 UTC m=+1024.316558817" watchObservedRunningTime="2026-02-16 12:48:38.744853399 +0000 UTC m=+1024.337868733" Feb 16 12:48:38 crc kubenswrapper[4799]: I0216 12:48:38.751181 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf" 
podStartSLOduration=3.888615688 podStartE2EDuration="42.751157257s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:59.066758508 +0000 UTC m=+984.659773842" lastFinishedPulling="2026-02-16 12:48:37.929300077 +0000 UTC m=+1023.522315411" observedRunningTime="2026-02-16 12:48:38.744455397 +0000 UTC m=+1024.337470761" watchObservedRunningTime="2026-02-16 12:48:38.751157257 +0000 UTC m=+1024.344172601" Feb 16 12:48:38 crc kubenswrapper[4799]: I0216 12:48:38.766611 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" podStartSLOduration=3.805253633 podStartE2EDuration="42.766585687s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:47:58.968093286 +0000 UTC m=+984.561108620" lastFinishedPulling="2026-02-16 12:48:37.92942534 +0000 UTC m=+1023.522440674" observedRunningTime="2026-02-16 12:48:38.759163085 +0000 UTC m=+1024.352178439" watchObservedRunningTime="2026-02-16 12:48:38.766585687 +0000 UTC m=+1024.359601021" Feb 16 12:48:40 crc kubenswrapper[4799]: I0216 12:48:40.735475 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" event={"ID":"ae60b108-5e33-408f-a861-8e2e1e9ab643","Type":"ContainerStarted","Data":"f196830d0c63a9adf74eb64980b53a5df053abac76d8531b7f699b83e294c6c0"} Feb 16 12:48:40 crc kubenswrapper[4799]: I0216 12:48:40.736432 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:48:40 crc kubenswrapper[4799]: I0216 12:48:40.748352 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" 
event={"ID":"3469cc9e-8b93-4c52-957a-78b91019767d","Type":"ContainerStarted","Data":"2495a36cf4a0086e7e3430231959769bdd20201c0c104e632c46e8129b37e5e6"} Feb 16 12:48:40 crc kubenswrapper[4799]: I0216 12:48:40.749935 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:48:40 crc kubenswrapper[4799]: I0216 12:48:40.768598 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" podStartSLOduration=33.980145586 podStartE2EDuration="44.768577121s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:48:29.35994764 +0000 UTC m=+1014.952962974" lastFinishedPulling="2026-02-16 12:48:40.148379175 +0000 UTC m=+1025.741394509" observedRunningTime="2026-02-16 12:48:40.767173539 +0000 UTC m=+1026.360188903" watchObservedRunningTime="2026-02-16 12:48:40.768577121 +0000 UTC m=+1026.361592475" Feb 16 12:48:40 crc kubenswrapper[4799]: I0216 12:48:40.807810 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" podStartSLOduration=34.214525025 podStartE2EDuration="44.8077778s" podCreationTimestamp="2026-02-16 12:47:56 +0000 UTC" firstStartedPulling="2026-02-16 12:48:29.532619999 +0000 UTC m=+1015.125635333" lastFinishedPulling="2026-02-16 12:48:40.125872774 +0000 UTC m=+1025.718888108" observedRunningTime="2026-02-16 12:48:40.791881186 +0000 UTC m=+1026.384896530" watchObservedRunningTime="2026-02-16 12:48:40.8077778 +0000 UTC m=+1026.400793144" Feb 16 12:48:46 crc kubenswrapper[4799]: I0216 12:48:46.638348 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-ddwg6" Feb 16 12:48:47 crc kubenswrapper[4799]: I0216 12:48:47.069162 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-8r6qg" Feb 16 12:48:47 crc kubenswrapper[4799]: I0216 12:48:47.294084 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf" Feb 16 12:48:47 crc kubenswrapper[4799]: I0216 12:48:47.297634 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7f65d44ccf-htwqf" Feb 16 12:48:48 crc kubenswrapper[4799]: I0216 12:48:48.877695 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-gt66t" Feb 16 12:48:49 crc kubenswrapper[4799]: I0216 12:48:49.041303 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5" Feb 16 12:48:49 crc kubenswrapper[4799]: I0216 12:48:49.145283 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-667bdd5bc9-lpnbm" Feb 16 12:48:51 crc kubenswrapper[4799]: I0216 12:48:51.793232 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:48:51 crc kubenswrapper[4799]: I0216 12:48:51.793688 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:49:07 crc 
kubenswrapper[4799]: I0216 12:49:07.377158 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8ff9c764f-69mxb"] Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.379085 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.384361 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.384584 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.384621 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.390961 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-w74dh" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.398482 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8ff9c764f-69mxb"] Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.452075 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68587c85b9-8s9m4"] Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.453133 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.455943 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.478191 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68587c85b9-8s9m4"] Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.507098 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlj5\" (UniqueName: \"kubernetes.io/projected/e7590a55-e3d2-405c-9bcd-cb730502555e-kube-api-access-8wlj5\") pod \"dnsmasq-dns-8ff9c764f-69mxb\" (UID: \"e7590a55-e3d2-405c-9bcd-cb730502555e\") " pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.507180 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7590a55-e3d2-405c-9bcd-cb730502555e-config\") pod \"dnsmasq-dns-8ff9c764f-69mxb\" (UID: \"e7590a55-e3d2-405c-9bcd-cb730502555e\") " pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.608434 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlj5\" (UniqueName: \"kubernetes.io/projected/e7590a55-e3d2-405c-9bcd-cb730502555e-kube-api-access-8wlj5\") pod \"dnsmasq-dns-8ff9c764f-69mxb\" (UID: \"e7590a55-e3d2-405c-9bcd-cb730502555e\") " pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.608518 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh46s\" (UniqueName: \"kubernetes.io/projected/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-kube-api-access-zh46s\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " 
pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.608558 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7590a55-e3d2-405c-9bcd-cb730502555e-config\") pod \"dnsmasq-dns-8ff9c764f-69mxb\" (UID: \"e7590a55-e3d2-405c-9bcd-cb730502555e\") " pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.608622 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-config\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.608672 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-dns-svc\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.609885 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7590a55-e3d2-405c-9bcd-cb730502555e-config\") pod \"dnsmasq-dns-8ff9c764f-69mxb\" (UID: \"e7590a55-e3d2-405c-9bcd-cb730502555e\") " pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.629281 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlj5\" (UniqueName: \"kubernetes.io/projected/e7590a55-e3d2-405c-9bcd-cb730502555e-kube-api-access-8wlj5\") pod \"dnsmasq-dns-8ff9c764f-69mxb\" (UID: \"e7590a55-e3d2-405c-9bcd-cb730502555e\") " pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:07 crc kubenswrapper[4799]: 
I0216 12:49:07.709735 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-config\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.709800 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-dns-svc\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.709885 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh46s\" (UniqueName: \"kubernetes.io/projected/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-kube-api-access-zh46s\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.711913 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-dns-svc\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.712018 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-config\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.724795 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.741210 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh46s\" (UniqueName: \"kubernetes.io/projected/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-kube-api-access-zh46s\") pod \"dnsmasq-dns-68587c85b9-8s9m4\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:07 crc kubenswrapper[4799]: I0216 12:49:07.772495 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:08 crc kubenswrapper[4799]: I0216 12:49:08.193241 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8ff9c764f-69mxb"] Feb 16 12:49:08 crc kubenswrapper[4799]: I0216 12:49:08.265675 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68587c85b9-8s9m4"] Feb 16 12:49:08 crc kubenswrapper[4799]: W0216 12:49:08.270979 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb822f1c_1af0_4f20_bfa4_caa95ca22c42.slice/crio-066726fe625daeec65a82cdc787d75152dc2ce99b7e5e36d76f054c70d22bf96 WatchSource:0}: Error finding container 066726fe625daeec65a82cdc787d75152dc2ce99b7e5e36d76f054c70d22bf96: Status 404 returned error can't find the container with id 066726fe625daeec65a82cdc787d75152dc2ce99b7e5e36d76f054c70d22bf96 Feb 16 12:49:09 crc kubenswrapper[4799]: I0216 12:49:09.013998 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" event={"ID":"eb822f1c-1af0-4f20-bfa4-caa95ca22c42","Type":"ContainerStarted","Data":"066726fe625daeec65a82cdc787d75152dc2ce99b7e5e36d76f054c70d22bf96"} Feb 16 12:49:09 crc kubenswrapper[4799]: I0216 12:49:09.015235 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" 
event={"ID":"e7590a55-e3d2-405c-9bcd-cb730502555e","Type":"ContainerStarted","Data":"7997ee67d561cf34f6022ea6970b9464f74c299a556c340c6ebf6da600984ca4"} Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.058227 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68587c85b9-8s9m4"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.103565 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6545cfffb5-pgxg2"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.104939 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.112110 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6545cfffb5-pgxg2"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.269305 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-config\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.269355 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.269497 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bn7r\" (UniqueName: \"kubernetes.io/projected/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-kube-api-access-8bn7r\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " 
pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.370821 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bn7r\" (UniqueName: \"kubernetes.io/projected/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-kube-api-access-8bn7r\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.370911 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-config\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.370930 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.371793 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.372361 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-config\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.376697 4799 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8ff9c764f-69mxb"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.398100 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bn7r\" (UniqueName: \"kubernetes.io/projected/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-kube-api-access-8bn7r\") pod \"dnsmasq-dns-6545cfffb5-pgxg2\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.406793 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c5c99497-snwfh"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.408034 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.415562 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c5c99497-snwfh"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.442767 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.577304 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-dns-svc\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.577411 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnvd\" (UniqueName: \"kubernetes.io/projected/79039adc-b677-4066-8832-95e2589654d5-kube-api-access-wtnvd\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.577484 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-config\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.678936 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-config\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.679021 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-dns-svc\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " 
pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.679080 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnvd\" (UniqueName: \"kubernetes.io/projected/79039adc-b677-4066-8832-95e2589654d5-kube-api-access-wtnvd\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.680982 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-config\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.681060 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-dns-svc\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.701480 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6545cfffb5-pgxg2"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.711309 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnvd\" (UniqueName: \"kubernetes.io/projected/79039adc-b677-4066-8832-95e2589654d5-kube-api-access-wtnvd\") pod \"dnsmasq-dns-78c5c99497-snwfh\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.737912 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69f8f5886f-zzspb"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.739151 4799 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.746668 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f8f5886f-zzspb"] Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.752471 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.882603 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlw7w\" (UniqueName: \"kubernetes.io/projected/21585780-9181-47a1-beb1-72cbd9970fb9-kube-api-access-jlw7w\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.882673 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-dns-svc\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.882698 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-config\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.984713 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlw7w\" (UniqueName: \"kubernetes.io/projected/21585780-9181-47a1-beb1-72cbd9970fb9-kube-api-access-jlw7w\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " 
pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.984766 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-dns-svc\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.984793 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-config\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.985745 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-config\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:11 crc kubenswrapper[4799]: I0216 12:49:11.985762 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-dns-svc\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.008020 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlw7w\" (UniqueName: \"kubernetes.io/projected/21585780-9181-47a1-beb1-72cbd9970fb9-kube-api-access-jlw7w\") pod \"dnsmasq-dns-69f8f5886f-zzspb\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") " pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.069505 4799 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.240561 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.242517 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.251516 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.251584 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.251704 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.251759 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.251914 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.251967 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zghbx" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.252152 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.258663 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391147 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391199 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391219 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391235 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs5br\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-kube-api-access-bs5br\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391429 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391457 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/8af3fbd4-c626-4920-915d-0f50d12662b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391486 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391623 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391741 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8af3fbd4-c626-4920-915d-0f50d12662b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391909 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.391944 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.493909 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.493962 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.493997 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8af3fbd4-c626-4920-915d-0f50d12662b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494036 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494056 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " 
pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494182 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494233 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494325 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs5br\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-kube-api-access-bs5br\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494547 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494578 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8af3fbd4-c626-4920-915d-0f50d12662b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.494630 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.495138 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.495424 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.495633 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.495801 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.497457 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.499447 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.501147 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8af3fbd4-c626-4920-915d-0f50d12662b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.502660 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8af3fbd4-c626-4920-915d-0f50d12662b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.502978 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.515317 4799 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.538225 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.539335 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs5br\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-kube-api-access-bs5br\") pod \"rabbitmq-server-0\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.539455 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.546999 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.547260 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7r8ht" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.547466 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.547293 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.548100 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.549036 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 12:49:12 crc 
kubenswrapper[4799]: I0216 12:49:12.551905 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.575633 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.588503 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.697766 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.697830 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.697879 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.697909 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.697932 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.697976 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.698000 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.698015 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.698029 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.698053 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.698076 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk49s\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-kube-api-access-wk49s\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799275 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799357 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799386 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799414 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799469 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799493 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799507 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799520 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799542 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799564 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk49s\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-kube-api-access-wk49s\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799588 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.799798 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.800273 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.800356 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.800657 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.806954 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.807852 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.808813 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.811233 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.815819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.817910 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk49s\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-kube-api-access-wk49s\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.819835 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.832843 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.833496 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.834723 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.842768 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.843139 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.843400 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-vmx6w" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.843691 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.843870 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.844019 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.844347 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.856717 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Feb 16 12:49:12 crc kubenswrapper[4799]: I0216 12:49:12.901322 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.002658 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.002721 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.004645 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b6ff320-8742-454a-9a6e-766db7e2c3a8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.005134 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.005162 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-tls\") 
pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.005229 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.005264 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm6gr\" (UniqueName: \"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-kube-api-access-rm6gr\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.005332 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.005388 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.005742 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/5b6ff320-8742-454a-9a6e-766db7e2c3a8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.005768 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107178 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107230 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107274 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b6ff320-8742-454a-9a6e-766db7e2c3a8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107292 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107314 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107340 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107363 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b6ff320-8742-454a-9a6e-766db7e2c3a8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107387 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107404 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107437 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.107462 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm6gr\" (UniqueName: \"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-kube-api-access-rm6gr\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.108416 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.108459 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.108562 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.108821 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.109294 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.109446 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b6ff320-8742-454a-9a6e-766db7e2c3a8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.111965 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.114149 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.114228 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b6ff320-8742-454a-9a6e-766db7e2c3a8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.123266 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b6ff320-8742-454a-9a6e-766db7e2c3a8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.126381 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm6gr\" (UniqueName: \"kubernetes.io/projected/5b6ff320-8742-454a-9a6e-766db7e2c3a8-kube-api-access-rm6gr\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.128852 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5b6ff320-8742-454a-9a6e-766db7e2c3a8\") " pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.197611 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.996560 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 12:49:13 crc kubenswrapper[4799]: I0216 12:49:13.998766 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.001553 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.002201 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.002952 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.003042 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-c64g9" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.008769 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.021878 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.130337 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-config-data-default\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.130485 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h2tt\" (UniqueName: 
\"kubernetes.io/projected/19d52513-0bac-433d-8167-3abd90820fff-kube-api-access-9h2tt\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.130538 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19d52513-0bac-433d-8167-3abd90820fff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.130642 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d52513-0bac-433d-8167-3abd90820fff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.130861 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.130924 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d52513-0bac-433d-8167-3abd90820fff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.131007 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-kolla-config\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.131062 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232538 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d52513-0bac-433d-8167-3abd90820fff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232611 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-kolla-config\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232634 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232709 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-config-data-default\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " 
pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232748 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h2tt\" (UniqueName: \"kubernetes.io/projected/19d52513-0bac-433d-8167-3abd90820fff-kube-api-access-9h2tt\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232765 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19d52513-0bac-433d-8167-3abd90820fff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232828 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d52513-0bac-433d-8167-3abd90820fff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232867 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.232918 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.233466 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-kolla-config\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.234330 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-config-data-default\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.234389 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d52513-0bac-433d-8167-3abd90820fff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.234365 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19d52513-0bac-433d-8167-3abd90820fff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.239135 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d52513-0bac-433d-8167-3abd90820fff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.279980 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.286615 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d52513-0bac-433d-8167-3abd90820fff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.287312 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h2tt\" (UniqueName: \"kubernetes.io/projected/19d52513-0bac-433d-8167-3abd90820fff-kube-api-access-9h2tt\") pod \"openstack-galera-0\" (UID: \"19d52513-0bac-433d-8167-3abd90820fff\") " pod="openstack/openstack-galera-0" Feb 16 12:49:14 crc kubenswrapper[4799]: I0216 12:49:14.329517 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.519671 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.521180 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.523933 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.524060 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.524510 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fhq9t" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.524705 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.533681 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.658042 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.658104 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ddc5ff-d6d1-4997-8763-e97603e7df10-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.658181 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/06ddc5ff-d6d1-4997-8763-e97603e7df10-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.658206 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.658236 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.658348 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.658494 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4cxt\" (UniqueName: \"kubernetes.io/projected/06ddc5ff-d6d1-4997-8763-e97603e7df10-kube-api-access-w4cxt\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.658522 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/06ddc5ff-d6d1-4997-8763-e97603e7df10-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.667509 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.668753 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.676114 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.676385 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.677478 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-mvh4j" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.679471 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.759829 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ddc5ff-d6d1-4997-8763-e97603e7df10-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.759898 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760462 
4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68cb9f4-b04b-4b52-92e0-153239877a17-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760501 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f68cb9f4-b04b-4b52-92e0-153239877a17-config-data\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760533 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760564 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f68cb9f4-b04b-4b52-92e0-153239877a17-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760590 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760637 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mt9hv\" (UniqueName: \"kubernetes.io/projected/f68cb9f4-b04b-4b52-92e0-153239877a17-kube-api-access-mt9hv\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4cxt\" (UniqueName: \"kubernetes.io/projected/06ddc5ff-d6d1-4997-8763-e97603e7df10-kube-api-access-w4cxt\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760699 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06ddc5ff-d6d1-4997-8763-e97603e7df10-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760789 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760824 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ddc5ff-d6d1-4997-8763-e97603e7df10-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760847 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/f68cb9f4-b04b-4b52-92e0-153239877a17-kolla-config\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.760413 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.761684 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06ddc5ff-d6d1-4997-8763-e97603e7df10-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.762510 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.762674 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.763301 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06ddc5ff-d6d1-4997-8763-e97603e7df10-config-data-default\") pod \"openstack-cell1-galera-0\" 
(UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.765843 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ddc5ff-d6d1-4997-8763-e97603e7df10-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.769896 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ddc5ff-d6d1-4997-8763-e97603e7df10-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.778025 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4cxt\" (UniqueName: \"kubernetes.io/projected/06ddc5ff-d6d1-4997-8763-e97603e7df10-kube-api-access-w4cxt\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.789560 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"06ddc5ff-d6d1-4997-8763-e97603e7df10\") " pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.861822 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f68cb9f4-b04b-4b52-92e0-153239877a17-kolla-config\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.861955 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68cb9f4-b04b-4b52-92e0-153239877a17-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.861989 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f68cb9f4-b04b-4b52-92e0-153239877a17-config-data\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.862017 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f68cb9f4-b04b-4b52-92e0-153239877a17-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.862050 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt9hv\" (UniqueName: \"kubernetes.io/projected/f68cb9f4-b04b-4b52-92e0-153239877a17-kube-api-access-mt9hv\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.862575 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f68cb9f4-b04b-4b52-92e0-153239877a17-kolla-config\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.864812 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f68cb9f4-b04b-4b52-92e0-153239877a17-config-data\") pod \"memcached-0\" (UID: 
\"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.867569 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f68cb9f4-b04b-4b52-92e0-153239877a17-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.868153 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68cb9f4-b04b-4b52-92e0-153239877a17-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.876790 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt9hv\" (UniqueName: \"kubernetes.io/projected/f68cb9f4-b04b-4b52-92e0-153239877a17-kube-api-access-mt9hv\") pod \"memcached-0\" (UID: \"f68cb9f4-b04b-4b52-92e0-153239877a17\") " pod="openstack/memcached-0" Feb 16 12:49:15 crc kubenswrapper[4799]: I0216 12:49:15.893974 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:16 crc kubenswrapper[4799]: I0216 12:49:16.020742 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 12:49:16 crc kubenswrapper[4799]: I0216 12:49:16.378486 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 12:49:17 crc kubenswrapper[4799]: I0216 12:49:17.820725 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:49:17 crc kubenswrapper[4799]: I0216 12:49:17.822174 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 12:49:17 crc kubenswrapper[4799]: I0216 12:49:17.825559 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vh44v" Feb 16 12:49:17 crc kubenswrapper[4799]: I0216 12:49:17.895080 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:49:17 crc kubenswrapper[4799]: I0216 12:49:17.900254 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8th\" (UniqueName: \"kubernetes.io/projected/05acd04d-4502-4380-be32-5997bb43cc76-kube-api-access-sd8th\") pod \"kube-state-metrics-0\" (UID: \"05acd04d-4502-4380-be32-5997bb43cc76\") " pod="openstack/kube-state-metrics-0" Feb 16 12:49:18 crc kubenswrapper[4799]: I0216 12:49:18.004087 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd8th\" (UniqueName: \"kubernetes.io/projected/05acd04d-4502-4380-be32-5997bb43cc76-kube-api-access-sd8th\") pod \"kube-state-metrics-0\" (UID: \"05acd04d-4502-4380-be32-5997bb43cc76\") " pod="openstack/kube-state-metrics-0" Feb 16 12:49:18 crc kubenswrapper[4799]: I0216 12:49:18.030809 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8th\" (UniqueName: \"kubernetes.io/projected/05acd04d-4502-4380-be32-5997bb43cc76-kube-api-access-sd8th\") pod \"kube-state-metrics-0\" (UID: \"05acd04d-4502-4380-be32-5997bb43cc76\") " pod="openstack/kube-state-metrics-0" Feb 16 12:49:18 crc kubenswrapper[4799]: I0216 12:49:18.140298 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.163551 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.165899 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.174617 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.174713 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.174827 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.174871 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.174952 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.175176 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.175323 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9r2q7" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.186439 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.190071 4799 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226312 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226396 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226451 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226587 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwql\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-kube-api-access-kzwql\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226693 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226744 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226773 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226835 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.226976 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.227391 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329286 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329375 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329403 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329441 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329480 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwql\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-kube-api-access-kzwql\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329511 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329528 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329548 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329575 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.329593 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.330337 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.331225 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.332768 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.334795 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.334857 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d8cc6eee7369a0a6de9fc43cae4068e826e1253c0ec6fd8cae0c234b0f57b7e3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.334973 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.335344 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.336771 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.336920 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.337866 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.347259 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwql\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-kube-api-access-kzwql\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.364335 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:19 crc kubenswrapper[4799]: I0216 12:49:19.517351 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 12:49:20 crc kubenswrapper[4799]: I0216 12:49:20.117054 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19d52513-0bac-433d-8167-3abd90820fff","Type":"ContainerStarted","Data":"66ff6e1f00e8c26ce9bcdd9efdab0040b2d7be4bb270ec65f3a6d4dba8e2043c"} Feb 16 12:49:20 crc kubenswrapper[4799]: I0216 12:49:20.953896 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wr6ph"] Feb 16 12:49:20 crc kubenswrapper[4799]: I0216 12:49:20.955368 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:20 crc kubenswrapper[4799]: I0216 12:49:20.958685 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-74tn7" Feb 16 12:49:20 crc kubenswrapper[4799]: I0216 12:49:20.959514 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 12:49:20 crc kubenswrapper[4799]: I0216 12:49:20.959680 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 12:49:20 crc kubenswrapper[4799]: I0216 12:49:20.968796 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph"] Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.040898 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6rnj7"] Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.042485 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.057989 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-etc-ovs\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058049 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-run\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058146 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-log\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058166 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-lib\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058182 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8e986-71a6-47cc-a34e-ddc323df4af4-combined-ca-bundle\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" 
Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058212 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-run\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058231 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ng7\" (UniqueName: \"kubernetes.io/projected/46a97d94-f787-4e62-86df-1ee58bdae9ce-kube-api-access-l2ng7\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058257 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8rf4\" (UniqueName: \"kubernetes.io/projected/d0a8e986-71a6-47cc-a34e-ddc323df4af4-kube-api-access-d8rf4\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058279 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46a97d94-f787-4e62-86df-1ee58bdae9ce-scripts\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058300 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a8e986-71a6-47cc-a34e-ddc323df4af4-ovn-controller-tls-certs\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " 
pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058317 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-log-ovn\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058326 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6rnj7"] Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058341 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-run-ovn\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.058490 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0a8e986-71a6-47cc-a34e-ddc323df4af4-scripts\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.159626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-log\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.159672 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-lib\") pod \"ovn-controller-ovs-6rnj7\" (UID: 
\"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.159697 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8e986-71a6-47cc-a34e-ddc323df4af4-combined-ca-bundle\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.159727 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-run\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.159749 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2ng7\" (UniqueName: \"kubernetes.io/projected/46a97d94-f787-4e62-86df-1ee58bdae9ce-kube-api-access-l2ng7\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.159782 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8rf4\" (UniqueName: \"kubernetes.io/projected/d0a8e986-71a6-47cc-a34e-ddc323df4af4-kube-api-access-d8rf4\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.159810 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46a97d94-f787-4e62-86df-1ee58bdae9ce-scripts\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc 
kubenswrapper[4799]: I0216 12:49:21.159838 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a8e986-71a6-47cc-a34e-ddc323df4af4-ovn-controller-tls-certs\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160301 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-log-ovn\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160351 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-run\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160477 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-log-ovn\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160506 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-lib\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160518 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-run-ovn\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160570 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0a8e986-71a6-47cc-a34e-ddc323df4af4-scripts\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160614 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-etc-ovs\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-run\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160662 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-var-log\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160728 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-run-ovn\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" 
Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160792 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d0a8e986-71a6-47cc-a34e-ddc323df4af4-var-run\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.160943 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/46a97d94-f787-4e62-86df-1ee58bdae9ce-etc-ovs\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.161959 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46a97d94-f787-4e62-86df-1ee58bdae9ce-scripts\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.163260 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0a8e986-71a6-47cc-a34e-ddc323df4af4-scripts\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.174878 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8e986-71a6-47cc-a34e-ddc323df4af4-combined-ca-bundle\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.179864 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d0a8e986-71a6-47cc-a34e-ddc323df4af4-ovn-controller-tls-certs\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.180052 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2ng7\" (UniqueName: \"kubernetes.io/projected/46a97d94-f787-4e62-86df-1ee58bdae9ce-kube-api-access-l2ng7\") pod \"ovn-controller-ovs-6rnj7\" (UID: \"46a97d94-f787-4e62-86df-1ee58bdae9ce\") " pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.180566 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8rf4\" (UniqueName: \"kubernetes.io/projected/d0a8e986-71a6-47cc-a34e-ddc323df4af4-kube-api-access-d8rf4\") pod \"ovn-controller-wr6ph\" (UID: \"d0a8e986-71a6-47cc-a34e-ddc323df4af4\") " pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.283332 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.299732 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.304595 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.311226 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.311629 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.312532 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.312683 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.313011 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-g4jrq" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.314818 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.363145 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.364448 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.364498 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.364696 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.364828 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-config\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.364866 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pc5h\" (UniqueName: \"kubernetes.io/projected/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-kube-api-access-6pc5h\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" 
Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.365039 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.365311 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.365421 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.466844 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.467236 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.467305 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.467337 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-config\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.467362 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pc5h\" (UniqueName: \"kubernetes.io/projected/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-kube-api-access-6pc5h\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.467407 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.467438 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.467467 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.468615 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.468609 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.468919 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.469595 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-config\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.471839 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.472039 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.475545 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.484898 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pc5h\" (UniqueName: \"kubernetes.io/projected/2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7-kube-api-access-6pc5h\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.492038 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7\") " pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.636334 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.792854 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.792915 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.792967 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.793727 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34c6876ea0db42f2332afd913f232568333ea876303d83a249ce58ef9abe96d8"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:49:21 crc kubenswrapper[4799]: I0216 12:49:21.793791 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://34c6876ea0db42f2332afd913f232568333ea876303d83a249ce58ef9abe96d8" gracePeriod=600 Feb 16 12:49:22 crc kubenswrapper[4799]: I0216 12:49:22.133612 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"34c6876ea0db42f2332afd913f232568333ea876303d83a249ce58ef9abe96d8"} Feb 16 12:49:22 crc kubenswrapper[4799]: I0216 12:49:22.133685 4799 scope.go:117] "RemoveContainer" containerID="86245d72136a5128ea7329ec812aaf474d9f9a0b7cefc3d679dd266cf69dce8f" Feb 16 12:49:22 crc kubenswrapper[4799]: I0216 12:49:22.133539 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="34c6876ea0db42f2332afd913f232568333ea876303d83a249ce58ef9abe96d8" exitCode=0 Feb 16 12:49:24 crc kubenswrapper[4799]: E0216 12:49:24.911270 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 12:49:24 crc kubenswrapper[4799]: E0216 12:49:24.911561 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 12:49:24 crc kubenswrapper[4799]: E0216 12:49:24.911674 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.119:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wlj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8ff9c764f-69mxb_openstack(e7590a55-e3d2-405c-9bcd-cb730502555e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:49:24 crc kubenswrapper[4799]: E0216 12:49:24.913028 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" podUID="e7590a55-e3d2-405c-9bcd-cb730502555e" Feb 16 12:49:24 crc kubenswrapper[4799]: E0216 12:49:24.958197 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 12:49:24 crc kubenswrapper[4799]: E0216 12:49:24.958252 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 12:49:24 crc kubenswrapper[4799]: E0216 12:49:24.958358 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.119:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh46s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-68587c85b9-8s9m4_openstack(eb822f1c-1af0-4f20-bfa4-caa95ca22c42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:49:24 crc kubenswrapper[4799]: E0216 12:49:24.959644 4799 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" podUID="eb822f1c-1af0-4f20-bfa4-caa95ca22c42" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.395675 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.400348 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.409973 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.410209 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.410266 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.411884 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4hwrc" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.418743 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.548663 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdgzs\" (UniqueName: \"kubernetes.io/projected/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-kube-api-access-gdgzs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.548753 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.548801 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-config\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.548895 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.548998 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.549055 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.549135 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.549171 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.650930 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.650981 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-config\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.651017 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.651051 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " 
pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.651080 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.651118 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.651158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.651194 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdgzs\" (UniqueName: \"kubernetes.io/projected/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-kube-api-access-gdgzs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.657771 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.658584 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.662848 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-config\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.663302 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.665531 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.677391 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.677843 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " 
pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.696416 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.713288 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdgzs\" (UniqueName: \"kubernetes.io/projected/b93c98d8-9585-4406-8d4f-54ebdb84ee2d-kube-api-access-gdgzs\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.744418 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b93c98d8-9585-4406-8d4f-54ebdb84ee2d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.855182 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-config\") pod \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.855284 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-dns-svc\") pod \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.855400 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh46s\" (UniqueName: \"kubernetes.io/projected/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-kube-api-access-zh46s\") pod \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\" (UID: \"eb822f1c-1af0-4f20-bfa4-caa95ca22c42\") " Feb 16 12:49:25 crc 
kubenswrapper[4799]: I0216 12:49:25.856905 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-config" (OuterVolumeSpecName: "config") pod "eb822f1c-1af0-4f20-bfa4-caa95ca22c42" (UID: "eb822f1c-1af0-4f20-bfa4-caa95ca22c42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.857510 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb822f1c-1af0-4f20-bfa4-caa95ca22c42" (UID: "eb822f1c-1af0-4f20-bfa4-caa95ca22c42"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.859472 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-kube-api-access-zh46s" (OuterVolumeSpecName: "kube-api-access-zh46s") pod "eb822f1c-1af0-4f20-bfa4-caa95ca22c42" (UID: "eb822f1c-1af0-4f20-bfa4-caa95ca22c42"). InnerVolumeSpecName "kube-api-access-zh46s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.957580 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.957616 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:25 crc kubenswrapper[4799]: I0216 12:49:25.957628 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh46s\" (UniqueName: \"kubernetes.io/projected/eb822f1c-1af0-4f20-bfa4-caa95ca22c42-kube-api-access-zh46s\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.030351 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.111355 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.192892 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" event={"ID":"e7590a55-e3d2-405c-9bcd-cb730502555e","Type":"ContainerDied","Data":"7997ee67d561cf34f6022ea6970b9464f74c299a556c340c6ebf6da600984ca4"} Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.193013 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8ff9c764f-69mxb" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.203932 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" event={"ID":"eb822f1c-1af0-4f20-bfa4-caa95ca22c42","Type":"ContainerDied","Data":"066726fe625daeec65a82cdc787d75152dc2ce99b7e5e36d76f054c70d22bf96"} Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.204063 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68587c85b9-8s9m4" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.227867 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"02716d4728e3df68a334a717adc33b15d61e7b7d0fc4e582388c3db1323e8e1a"} Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.235049 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.242375 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.248213 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.262673 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c5c99497-snwfh"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.272960 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wlj5\" (UniqueName: \"kubernetes.io/projected/e7590a55-e3d2-405c-9bcd-cb730502555e-kube-api-access-8wlj5\") pod \"e7590a55-e3d2-405c-9bcd-cb730502555e\" (UID: \"e7590a55-e3d2-405c-9bcd-cb730502555e\") " Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.273172 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7590a55-e3d2-405c-9bcd-cb730502555e-config\") pod \"e7590a55-e3d2-405c-9bcd-cb730502555e\" (UID: \"e7590a55-e3d2-405c-9bcd-cb730502555e\") " Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.275266 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7590a55-e3d2-405c-9bcd-cb730502555e-config" (OuterVolumeSpecName: "config") pod "e7590a55-e3d2-405c-9bcd-cb730502555e" (UID: "e7590a55-e3d2-405c-9bcd-cb730502555e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.291255 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7590a55-e3d2-405c-9bcd-cb730502555e-kube-api-access-8wlj5" (OuterVolumeSpecName: "kube-api-access-8wlj5") pod "e7590a55-e3d2-405c-9bcd-cb730502555e" (UID: "e7590a55-e3d2-405c-9bcd-cb730502555e"). InnerVolumeSpecName "kube-api-access-8wlj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.326934 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68587c85b9-8s9m4"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.335489 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68587c85b9-8s9m4"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.342221 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.344486 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6545cfffb5-pgxg2"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.375511 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7590a55-e3d2-405c-9bcd-cb730502555e-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.375553 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wlj5\" (UniqueName: \"kubernetes.io/projected/e7590a55-e3d2-405c-9bcd-cb730502555e-kube-api-access-8wlj5\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.547596 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8ff9c764f-69mxb"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.552888 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8ff9c764f-69mxb"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.647082 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f8f5886f-zzspb"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.655782 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.663629 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.670068 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.691667 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:49:26 crc kubenswrapper[4799]: I0216 12:49:26.778914 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 12:49:27 crc kubenswrapper[4799]: I0216 12:49:27.159901 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7590a55-e3d2-405c-9bcd-cb730502555e" path="/var/lib/kubelet/pods/e7590a55-e3d2-405c-9bcd-cb730502555e/volumes" Feb 16 12:49:27 crc kubenswrapper[4799]: I0216 12:49:27.160759 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb822f1c-1af0-4f20-bfa4-caa95ca22c42" path="/var/lib/kubelet/pods/eb822f1c-1af0-4f20-bfa4-caa95ca22c42/volumes" Feb 16 12:49:27 crc kubenswrapper[4799]: I0216 12:49:27.238531 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5b6ff320-8742-454a-9a6e-766db7e2c3a8","Type":"ContainerStarted","Data":"aedeb6c5385dbc17a935533b259f55ee2545020f014965c2ab141edcbb63704f"} Feb 16 12:49:27 crc kubenswrapper[4799]: I0216 12:49:27.395285 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6rnj7"] Feb 16 12:49:27 crc kubenswrapper[4799]: I0216 12:49:27.757656 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 12:49:28 crc kubenswrapper[4799]: W0216 12:49:28.208207 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf68cb9f4_b04b_4b52_92e0_153239877a17.slice/crio-e4731ff4bf400be827f0e7245d07ebdfce95b19e8833b8c40c1964d572b92ab8 WatchSource:0}: Error finding container 
e4731ff4bf400be827f0e7245d07ebdfce95b19e8833b8c40c1964d572b92ab8: Status 404 returned error can't find the container with id e4731ff4bf400be827f0e7245d07ebdfce95b19e8833b8c40c1964d572b92ab8 Feb 16 12:49:28 crc kubenswrapper[4799]: W0216 12:49:28.209862 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8af3fbd4_c626_4920_915d_0f50d12662b6.slice/crio-a1a9f9017debaf754cc04fd8a11a038e20627d55df5263e2f9aa26a3e4d064bd WatchSource:0}: Error finding container a1a9f9017debaf754cc04fd8a11a038e20627d55df5263e2f9aa26a3e4d064bd: Status 404 returned error can't find the container with id a1a9f9017debaf754cc04fd8a11a038e20627d55df5263e2f9aa26a3e4d064bd Feb 16 12:49:28 crc kubenswrapper[4799]: W0216 12:49:28.213265 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79039adc_b677_4066_8832_95e2589654d5.slice/crio-f72bea844dfc53183f06dd566782c9015d75d430429e196c9b3ceeb9877cf7d4 WatchSource:0}: Error finding container f72bea844dfc53183f06dd566782c9015d75d430429e196c9b3ceeb9877cf7d4: Status 404 returned error can't find the container with id f72bea844dfc53183f06dd566782c9015d75d430429e196c9b3ceeb9877cf7d4 Feb 16 12:49:28 crc kubenswrapper[4799]: W0216 12:49:28.226374 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21585780_9181_47a1_beb1_72cbd9970fb9.slice/crio-a9ad7471776086cd24252c3afc0980204ec162c3106ab2e5488cdb5adb7d6d3f WatchSource:0}: Error finding container a9ad7471776086cd24252c3afc0980204ec162c3106ab2e5488cdb5adb7d6d3f: Status 404 returned error can't find the container with id a9ad7471776086cd24252c3afc0980204ec162c3106ab2e5488cdb5adb7d6d3f Feb 16 12:49:28 crc kubenswrapper[4799]: W0216 12:49:28.235857 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98c6ac1b_2c6b_42f1_831c_e98661c6166d.slice/crio-7dfc3ea490a6aec87c74d9374265462dec182356c09372d2e06fa56583dbd106 WatchSource:0}: Error finding container 7dfc3ea490a6aec87c74d9374265462dec182356c09372d2e06fa56583dbd106: Status 404 returned error can't find the container with id 7dfc3ea490a6aec87c74d9374265462dec182356c09372d2e06fa56583dbd106 Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.250708 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerStarted","Data":"7dfc3ea490a6aec87c74d9374265462dec182356c09372d2e06fa56583dbd106"} Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.253213 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f68cb9f4-b04b-4b52-92e0-153239877a17","Type":"ContainerStarted","Data":"e4731ff4bf400be827f0e7245d07ebdfce95b19e8833b8c40c1964d572b92ab8"} Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.285992 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" event={"ID":"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604","Type":"ContainerStarted","Data":"f6da950515de154ec82f16ffb1dbb30fa876d6d26a82262ed43967f9fb43f3b5"} Feb 16 12:49:28 crc kubenswrapper[4799]: E0216 12:49:28.287140 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:38.102.83.119:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54h55dhc7h567hcbhb4h56ch599h564h58dh566h85h699hb5h564h5b9hdh654h98h94hffh5d9h5cdh6h69h695h99h9dh6dh57ch5cdh694q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdgzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecA
ction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(b93c98d8-9585-4406-8d4f-54ebdb84ee2d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:49:28 crc kubenswrapper[4799]: E0216 12:49:28.291086 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.119:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 
600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wk49s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,R
unAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(1e3da06f-f1ef-4b8c-963b-0994cde5fab7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:49:28 crc kubenswrapper[4799]: E0216 12:49:28.291541 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n54h55dhc7h567hcbhb4h56ch599h564h58dh566h85h699hb5h564h5b9hdh654h98h94hffh5d9h5cdh6h69h695h99h9dh6dh57ch5cdh694q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath
:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdgzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(b93c98d8-9585-4406-8d4f-54ebdb84ee2d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.291721 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05acd04d-4502-4380-be32-5997bb43cc76","Type":"ContainerStarted","Data":"11d8b5775a250fb2c3a8ced1a49186a0a3f721dce612c3d9ef329ba7787e5b34"} Feb 16 12:49:28 crc kubenswrapper[4799]: E0216 12:49:28.292330 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" Feb 16 12:49:28 crc kubenswrapper[4799]: E0216 12:49:28.293265 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-sb-0" 
podUID="b93c98d8-9585-4406-8d4f-54ebdb84ee2d" Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.297707 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8af3fbd4-c626-4920-915d-0f50d12662b6","Type":"ContainerStarted","Data":"a1a9f9017debaf754cc04fd8a11a038e20627d55df5263e2f9aa26a3e4d064bd"} Feb 16 12:49:28 crc kubenswrapper[4799]: W0216 12:49:28.300472 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c8a7e69_a5da_4b7f_9ada_6ba2ceec88d7.slice/crio-4f9ccbbb9fb5916983425d4163a2eb7331a5d6b5d7f960fd4256bfd05af583f3 WatchSource:0}: Error finding container 4f9ccbbb9fb5916983425d4163a2eb7331a5d6b5d7f960fd4256bfd05af583f3: Status 404 returned error can't find the container with id 4f9ccbbb9fb5916983425d4163a2eb7331a5d6b5d7f960fd4256bfd05af583f3 Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.302093 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"06ddc5ff-d6d1-4997-8763-e97603e7df10","Type":"ContainerStarted","Data":"0ab5558064bcc8f7445bceacb9e7a8a7266caff62bae8d6166b9e0461f63d65d"} Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.303563 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" event={"ID":"79039adc-b677-4066-8832-95e2589654d5","Type":"ContainerStarted","Data":"f72bea844dfc53183f06dd566782c9015d75d430429e196c9b3ceeb9877cf7d4"} Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.304892 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6rnj7" event={"ID":"46a97d94-f787-4e62-86df-1ee58bdae9ce","Type":"ContainerStarted","Data":"582a6902e60af6bc37281bda62570a9a40ec0aba99e6213b273c146b7cc8cdc9"} Feb 16 12:49:28 crc kubenswrapper[4799]: I0216 12:49:28.306052 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" 
event={"ID":"21585780-9181-47a1-beb1-72cbd9970fb9","Type":"ContainerStarted","Data":"a9ad7471776086cd24252c3afc0980204ec162c3106ab2e5488cdb5adb7d6d3f"} Feb 16 12:49:28 crc kubenswrapper[4799]: E0216 12:49:28.352952 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:38.102.83.119:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n576h68dh677h665h5c7h59hc9h8bh665h666hffh644h5dfh55chf6h644hcchbdh687h5f5h5cdh675hc9h5cfh54chb7h66h9h4h59bh564h56fq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountProp
agation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pc5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:49:28 crc kubenswrapper[4799]: E0216 12:49:28.356880 4799 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n576h68dh677h665h5c7h59hc9h8bh665h666hffh644h5dfh55chf6h644hcchbdh687h5f5h5cdh675hc9h5cfh54chb7h66h9h4h59bh564h56fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pc5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 12:49:28 crc kubenswrapper[4799]: E0216 12:49:28.358819 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-nb-0" podUID="2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7" Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.316160 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7","Type":"ContainerStarted","Data":"4f9ccbbb9fb5916983425d4163a2eb7331a5d6b5d7f960fd4256bfd05af583f3"} Feb 16 12:49:29 crc kubenswrapper[4799]: E0216 12:49:29.318665 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7" Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.319655 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"b93c98d8-9585-4406-8d4f-54ebdb84ee2d","Type":"ContainerStarted","Data":"0625b72237337a6bf75f8b190b70a6cfe7f06d33cae1ad03f998311b267106c2"} Feb 16 12:49:29 crc kubenswrapper[4799]: E0216 12:49:29.323163 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="b93c98d8-9585-4406-8d4f-54ebdb84ee2d" Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.323492 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph" event={"ID":"d0a8e986-71a6-47cc-a34e-ddc323df4af4","Type":"ContainerStarted","Data":"b84714e0f840badba9981cca7cd5ad9185e2f3ec10d798ee37a5751af21ab6bf"} Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.329680 4799 generic.go:334] "Generic (PLEG): container finished" podID="79039adc-b677-4066-8832-95e2589654d5" containerID="ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc" exitCode=0 Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.329732 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" event={"ID":"79039adc-b677-4066-8832-95e2589654d5","Type":"ContainerDied","Data":"ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc"} Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.334819 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"06ddc5ff-d6d1-4997-8763-e97603e7df10","Type":"ContainerStarted","Data":"810d5a69270493fad948efba845ac4e566b25d14a8f2f36c31fe1bf6c44f2842"} Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 
12:49:29.338316 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e3da06f-f1ef-4b8c-963b-0994cde5fab7","Type":"ContainerStarted","Data":"311f57f1e3aafc08e1fbef6b908f2a0489b7d0ee2b596f1fd1f53eaf5a0966d8"} Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.339849 4799 generic.go:334] "Generic (PLEG): container finished" podID="8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" containerID="a56341f949e0735102cfc964faef8bcc14082122d19d45778c6d65f1f4bcb535" exitCode=0 Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.339927 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" event={"ID":"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604","Type":"ContainerDied","Data":"a56341f949e0735102cfc964faef8bcc14082122d19d45778c6d65f1f4bcb535"} Feb 16 12:49:29 crc kubenswrapper[4799]: E0216 12:49:29.340162 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.341655 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19d52513-0bac-433d-8167-3abd90820fff","Type":"ContainerStarted","Data":"2a346d05b8e2bf21787872f6737acaeb2d3c00910233a8fd3cf4016f8c64c720"} Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.344319 4799 generic.go:334] "Generic (PLEG): container finished" podID="21585780-9181-47a1-beb1-72cbd9970fb9" containerID="c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e" exitCode=0 Feb 16 12:49:29 crc kubenswrapper[4799]: I0216 12:49:29.344356 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" 
event={"ID":"21585780-9181-47a1-beb1-72cbd9970fb9","Type":"ContainerDied","Data":"c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e"} Feb 16 12:49:30 crc kubenswrapper[4799]: E0216 12:49:30.353785 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" Feb 16 12:49:30 crc kubenswrapper[4799]: E0216 12:49:30.353852 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7" Feb 16 12:49:30 crc kubenswrapper[4799]: E0216 12:49:30.353932 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="b93c98d8-9585-4406-8d4f-54ebdb84ee2d" Feb 16 12:49:31 crc kubenswrapper[4799]: I0216 12:49:31.611714 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:31 crc kubenswrapper[4799]: I0216 12:49:31.782684 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bn7r\" (UniqueName: \"kubernetes.io/projected/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-kube-api-access-8bn7r\") pod \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " Feb 16 12:49:31 crc kubenswrapper[4799]: I0216 12:49:31.783083 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-config\") pod \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " Feb 16 12:49:31 crc kubenswrapper[4799]: I0216 12:49:31.783303 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc\") pod \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " Feb 16 12:49:31 crc kubenswrapper[4799]: I0216 12:49:31.802061 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-kube-api-access-8bn7r" (OuterVolumeSpecName: "kube-api-access-8bn7r") pod "8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" (UID: "8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604"). InnerVolumeSpecName "kube-api-access-8bn7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:49:31 crc kubenswrapper[4799]: E0216 12:49:31.819940 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc podName:8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604 nodeName:}" failed. No retries permitted until 2026-02-16 12:49:32.319681186 +0000 UTC m=+1077.912696520 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc") pod "8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" (UID: "8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604") : error deleting /var/lib/kubelet/pods/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604/volume-subpaths: remove /var/lib/kubelet/pods/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604/volume-subpaths: no such file or directory Feb 16 12:49:31 crc kubenswrapper[4799]: I0216 12:49:31.820379 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-config" (OuterVolumeSpecName: "config") pod "8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" (UID: "8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:31 crc kubenswrapper[4799]: I0216 12:49:31.886858 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bn7r\" (UniqueName: \"kubernetes.io/projected/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-kube-api-access-8bn7r\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:31 crc kubenswrapper[4799]: I0216 12:49:31.886890 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:32 crc kubenswrapper[4799]: I0216 12:49:32.390633 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" event={"ID":"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604","Type":"ContainerDied","Data":"f6da950515de154ec82f16ffb1dbb30fa876d6d26a82262ed43967f9fb43f3b5"} Feb 16 12:49:32 crc kubenswrapper[4799]: I0216 12:49:32.390684 4799 scope.go:117] "RemoveContainer" containerID="a56341f949e0735102cfc964faef8bcc14082122d19d45778c6d65f1f4bcb535" Feb 16 12:49:32 crc kubenswrapper[4799]: I0216 12:49:32.390825 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6545cfffb5-pgxg2" Feb 16 12:49:32 crc kubenswrapper[4799]: I0216 12:49:32.395282 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc\") pod \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\" (UID: \"8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604\") " Feb 16 12:49:32 crc kubenswrapper[4799]: I0216 12:49:32.395870 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" (UID: "8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:32 crc kubenswrapper[4799]: I0216 12:49:32.396478 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:32 crc kubenswrapper[4799]: I0216 12:49:32.745634 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6545cfffb5-pgxg2"] Feb 16 12:49:32 crc kubenswrapper[4799]: I0216 12:49:32.754042 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6545cfffb5-pgxg2"] Feb 16 12:49:33 crc kubenswrapper[4799]: I0216 12:49:33.162731 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" path="/var/lib/kubelet/pods/8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604/volumes" Feb 16 12:49:35 crc kubenswrapper[4799]: I0216 12:49:35.419813 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" event={"ID":"79039adc-b677-4066-8832-95e2589654d5","Type":"ContainerStarted","Data":"88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c"} Feb 16 12:49:35 crc 
kubenswrapper[4799]: I0216 12:49:35.420578 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:35 crc kubenswrapper[4799]: I0216 12:49:35.422027 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" event={"ID":"21585780-9181-47a1-beb1-72cbd9970fb9","Type":"ContainerStarted","Data":"886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e"} Feb 16 12:49:35 crc kubenswrapper[4799]: I0216 12:49:35.424059 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:35 crc kubenswrapper[4799]: I0216 12:49:35.440278 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" podStartSLOduration=24.256778159 podStartE2EDuration="24.4402585s" podCreationTimestamp="2026-02-16 12:49:11 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.221612884 +0000 UTC m=+1073.814628218" lastFinishedPulling="2026-02-16 12:49:28.405093225 +0000 UTC m=+1073.998108559" observedRunningTime="2026-02-16 12:49:35.43817266 +0000 UTC m=+1081.031188014" watchObservedRunningTime="2026-02-16 12:49:35.4402585 +0000 UTC m=+1081.033273844" Feb 16 12:49:35 crc kubenswrapper[4799]: I0216 12:49:35.460107 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" podStartSLOduration=24.291554577 podStartE2EDuration="24.460090815s" podCreationTimestamp="2026-02-16 12:49:11 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.236481815 +0000 UTC m=+1073.829497159" lastFinishedPulling="2026-02-16 12:49:28.405018063 +0000 UTC m=+1073.998033397" observedRunningTime="2026-02-16 12:49:35.455728199 +0000 UTC m=+1081.048743543" watchObservedRunningTime="2026-02-16 12:49:35.460090815 +0000 UTC m=+1081.053106149" Feb 16 12:49:38 crc kubenswrapper[4799]: I0216 12:49:38.455960 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-ovs-6rnj7" event={"ID":"46a97d94-f787-4e62-86df-1ee58bdae9ce","Type":"ContainerStarted","Data":"67f2ecc6b972fc2c796a65814e0bb984bc64bcf76c89bd0f77a54c009f68dfc9"} Feb 16 12:49:38 crc kubenswrapper[4799]: I0216 12:49:38.458487 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f68cb9f4-b04b-4b52-92e0-153239877a17","Type":"ContainerStarted","Data":"aa6314a098dca0fe3969332c378f99284095a59eb26031f5332bf4f5597ebab9"} Feb 16 12:49:38 crc kubenswrapper[4799]: I0216 12:49:38.459876 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05acd04d-4502-4380-be32-5997bb43cc76","Type":"ContainerStarted","Data":"ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783"} Feb 16 12:49:39 crc kubenswrapper[4799]: I0216 12:49:39.473520 4799 generic.go:334] "Generic (PLEG): container finished" podID="46a97d94-f787-4e62-86df-1ee58bdae9ce" containerID="67f2ecc6b972fc2c796a65814e0bb984bc64bcf76c89bd0f77a54c009f68dfc9" exitCode=0 Feb 16 12:49:39 crc kubenswrapper[4799]: I0216 12:49:39.473618 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6rnj7" event={"ID":"46a97d94-f787-4e62-86df-1ee58bdae9ce","Type":"ContainerDied","Data":"67f2ecc6b972fc2c796a65814e0bb984bc64bcf76c89bd0f77a54c009f68dfc9"} Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.487942 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph" event={"ID":"d0a8e986-71a6-47cc-a34e-ddc323df4af4","Type":"ContainerStarted","Data":"0f8526f46e38c6236595e1edcdda0ee70ee12cda50649fe6c7167427f8bfafaa"} Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.488311 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-wr6ph" Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.490302 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"8af3fbd4-c626-4920-915d-0f50d12662b6","Type":"ContainerStarted","Data":"ab78b8d9b5f8e466b857a5f3123961b938a51fbc0fdeca53ac77857645a6278b"} Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.492451 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerStarted","Data":"0677b2ed4f0c4c4fee9ab7c93aa1d391e2c5ae3c940ee43085ef2f90e92099d2"} Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.494807 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5b6ff320-8742-454a-9a6e-766db7e2c3a8","Type":"ContainerStarted","Data":"0e738a235bf1a03a4fb291657c4ea978e5d51909cd447a2ba9108184991c5070"} Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.495097 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.527818 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wr6ph" podStartSLOduration=14.185939435 podStartE2EDuration="20.527793676s" podCreationTimestamp="2026-02-16 12:49:20 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.273587421 +0000 UTC m=+1073.866602755" lastFinishedPulling="2026-02-16 12:49:34.615441662 +0000 UTC m=+1080.208456996" observedRunningTime="2026-02-16 12:49:40.513759439 +0000 UTC m=+1086.106774813" watchObservedRunningTime="2026-02-16 12:49:40.527793676 +0000 UTC m=+1086.120809040" Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.606703 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.61401894 podStartE2EDuration="25.606684874s" podCreationTimestamp="2026-02-16 12:49:15 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.211021267 +0000 UTC m=+1073.804036601" lastFinishedPulling="2026-02-16 
12:49:34.203687191 +0000 UTC m=+1079.796702535" observedRunningTime="2026-02-16 12:49:40.59758669 +0000 UTC m=+1086.190602034" watchObservedRunningTime="2026-02-16 12:49:40.606684874 +0000 UTC m=+1086.199700218" Feb 16 12:49:40 crc kubenswrapper[4799]: I0216 12:49:40.665468 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.846211534 podStartE2EDuration="23.665450478s" podCreationTimestamp="2026-02-16 12:49:17 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.224512128 +0000 UTC m=+1073.817527472" lastFinishedPulling="2026-02-16 12:49:35.043751072 +0000 UTC m=+1080.636766416" observedRunningTime="2026-02-16 12:49:40.66138181 +0000 UTC m=+1086.254397154" watchObservedRunningTime="2026-02-16 12:49:40.665450478 +0000 UTC m=+1086.258465822" Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.504047 4799 generic.go:334] "Generic (PLEG): container finished" podID="19d52513-0bac-433d-8167-3abd90820fff" containerID="2a346d05b8e2bf21787872f6737acaeb2d3c00910233a8fd3cf4016f8c64c720" exitCode=0 Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.504165 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19d52513-0bac-433d-8167-3abd90820fff","Type":"ContainerDied","Data":"2a346d05b8e2bf21787872f6737acaeb2d3c00910233a8fd3cf4016f8c64c720"} Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.507850 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6rnj7" event={"ID":"46a97d94-f787-4e62-86df-1ee58bdae9ce","Type":"ContainerStarted","Data":"12861390ea2864ad55f0bb1fa4a2921878c6b5e2b1af59be9631de0d81ab46b4"} Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.507921 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6rnj7" 
event={"ID":"46a97d94-f787-4e62-86df-1ee58bdae9ce","Type":"ContainerStarted","Data":"dbad06b37ce0fd078637f717872bdf8656b3b3b4aeb538e9341f63820d9a878b"} Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.508180 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.508299 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6rnj7" Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.509499 4799 generic.go:334] "Generic (PLEG): container finished" podID="06ddc5ff-d6d1-4997-8763-e97603e7df10" containerID="810d5a69270493fad948efba845ac4e566b25d14a8f2f36c31fe1bf6c44f2842" exitCode=0 Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.510194 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"06ddc5ff-d6d1-4997-8763-e97603e7df10","Type":"ContainerDied","Data":"810d5a69270493fad948efba845ac4e566b25d14a8f2f36c31fe1bf6c44f2842"} Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.636341 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6rnj7" podStartSLOduration=14.674968817 podStartE2EDuration="20.636324623s" podCreationTimestamp="2026-02-16 12:49:21 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.242338225 +0000 UTC m=+1073.835353579" lastFinishedPulling="2026-02-16 12:49:34.203694051 +0000 UTC m=+1079.796709385" observedRunningTime="2026-02-16 12:49:41.636024394 +0000 UTC m=+1087.229039728" watchObservedRunningTime="2026-02-16 12:49:41.636324623 +0000 UTC m=+1087.229339957" Feb 16 12:49:41 crc kubenswrapper[4799]: I0216 12:49:41.762523 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:42 crc kubenswrapper[4799]: I0216 12:49:42.071342 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" Feb 16 12:49:42 crc kubenswrapper[4799]: I0216 12:49:42.117293 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c5c99497-snwfh"] Feb 16 12:49:42 crc kubenswrapper[4799]: I0216 12:49:42.528883 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"06ddc5ff-d6d1-4997-8763-e97603e7df10","Type":"ContainerStarted","Data":"450972fe8395792c76e932be00434379e5d3c6debdb8359fc0d9c2994483f97e"} Feb 16 12:49:42 crc kubenswrapper[4799]: I0216 12:49:42.533388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19d52513-0bac-433d-8167-3abd90820fff","Type":"ContainerStarted","Data":"83b8c725fe03de315f12f28402eea75f6fd201316acc11deac64e5b5756e81f3"} Feb 16 12:49:42 crc kubenswrapper[4799]: I0216 12:49:42.533628 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" podUID="79039adc-b677-4066-8832-95e2589654d5" containerName="dnsmasq-dns" containerID="cri-o://88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c" gracePeriod=10 Feb 16 12:49:42 crc kubenswrapper[4799]: I0216 12:49:42.557478 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.373771298 podStartE2EDuration="28.557462165s" podCreationTimestamp="2026-02-16 12:49:14 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.222784878 +0000 UTC m=+1073.815800212" lastFinishedPulling="2026-02-16 12:49:28.406475745 +0000 UTC m=+1073.999491079" observedRunningTime="2026-02-16 12:49:42.551380118 +0000 UTC m=+1088.144395452" watchObservedRunningTime="2026-02-16 12:49:42.557462165 +0000 UTC m=+1088.150477499" Feb 16 12:49:42 crc kubenswrapper[4799]: I0216 12:49:42.582385 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" 
podStartSLOduration=21.306250536 podStartE2EDuration="30.582365177s" podCreationTimestamp="2026-02-16 12:49:12 +0000 UTC" firstStartedPulling="2026-02-16 12:49:19.129072327 +0000 UTC m=+1064.722087671" lastFinishedPulling="2026-02-16 12:49:28.405186978 +0000 UTC m=+1073.998202312" observedRunningTime="2026-02-16 12:49:42.576636701 +0000 UTC m=+1088.169652035" watchObservedRunningTime="2026-02-16 12:49:42.582365177 +0000 UTC m=+1088.175380511" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.006600 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.191369 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-config\") pod \"79039adc-b677-4066-8832-95e2589654d5\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.192830 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtnvd\" (UniqueName: \"kubernetes.io/projected/79039adc-b677-4066-8832-95e2589654d5-kube-api-access-wtnvd\") pod \"79039adc-b677-4066-8832-95e2589654d5\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.193037 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-dns-svc\") pod \"79039adc-b677-4066-8832-95e2589654d5\" (UID: \"79039adc-b677-4066-8832-95e2589654d5\") " Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.202077 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79039adc-b677-4066-8832-95e2589654d5-kube-api-access-wtnvd" (OuterVolumeSpecName: "kube-api-access-wtnvd") pod "79039adc-b677-4066-8832-95e2589654d5" 
(UID: "79039adc-b677-4066-8832-95e2589654d5"). InnerVolumeSpecName "kube-api-access-wtnvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.233097 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-config" (OuterVolumeSpecName: "config") pod "79039adc-b677-4066-8832-95e2589654d5" (UID: "79039adc-b677-4066-8832-95e2589654d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.233976 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79039adc-b677-4066-8832-95e2589654d5" (UID: "79039adc-b677-4066-8832-95e2589654d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.294882 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.294912 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79039adc-b677-4066-8832-95e2589654d5-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.294922 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtnvd\" (UniqueName: \"kubernetes.io/projected/79039adc-b677-4066-8832-95e2589654d5-kube-api-access-wtnvd\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.548954 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"b93c98d8-9585-4406-8d4f-54ebdb84ee2d","Type":"ContainerStarted","Data":"7ba1c16797b8bcaedf492fabb7ea4013cd3ea7aa2e22bc35b951b8fceb9e3612"} Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.551263 4799 generic.go:334] "Generic (PLEG): container finished" podID="79039adc-b677-4066-8832-95e2589654d5" containerID="88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c" exitCode=0 Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.551391 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" event={"ID":"79039adc-b677-4066-8832-95e2589654d5","Type":"ContainerDied","Data":"88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c"} Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.552034 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" event={"ID":"79039adc-b677-4066-8832-95e2589654d5","Type":"ContainerDied","Data":"f72bea844dfc53183f06dd566782c9015d75d430429e196c9b3ceeb9877cf7d4"} Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.551417 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c5c99497-snwfh" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.552193 4799 scope.go:117] "RemoveContainer" containerID="88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.592717 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c5c99497-snwfh"] Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.593019 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78c5c99497-snwfh"] Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.690374 4799 scope.go:117] "RemoveContainer" containerID="ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.709441 4799 scope.go:117] "RemoveContainer" containerID="88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c" Feb 16 12:49:43 crc kubenswrapper[4799]: E0216 12:49:43.709929 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c\": container with ID starting with 88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c not found: ID does not exist" containerID="88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.710062 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c"} err="failed to get container status \"88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c\": rpc error: code = NotFound desc = could not find container \"88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c\": container with ID starting with 88ddfd6c48fa1269a107e3ccf270dae58ac08da5baba39ceed39b51433c69d4c not found: ID does not exist" Feb 16 
12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.711239 4799 scope.go:117] "RemoveContainer" containerID="ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc" Feb 16 12:49:43 crc kubenswrapper[4799]: E0216 12:49:43.711816 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc\": container with ID starting with ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc not found: ID does not exist" containerID="ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc" Feb 16 12:49:43 crc kubenswrapper[4799]: I0216 12:49:43.711848 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc"} err="failed to get container status \"ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc\": rpc error: code = NotFound desc = could not find container \"ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc\": container with ID starting with ff119edf47dd927bd6e38c7295935b8bc1272d3ed61f8c68513e3a16b1fa31bc not found: ID does not exist" Feb 16 12:49:44 crc kubenswrapper[4799]: I0216 12:49:44.330990 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 12:49:44 crc kubenswrapper[4799]: I0216 12:49:44.331480 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 12:49:44 crc kubenswrapper[4799]: I0216 12:49:44.563046 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7","Type":"ContainerStarted","Data":"ab226b928dcc83bf5c82855858374c531ca2a84bd82efea670aefe1397c6ab82"} Feb 16 12:49:45 crc kubenswrapper[4799]: I0216 12:49:45.161854 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="79039adc-b677-4066-8832-95e2589654d5" path="/var/lib/kubelet/pods/79039adc-b677-4066-8832-95e2589654d5/volumes" Feb 16 12:49:45 crc kubenswrapper[4799]: I0216 12:49:45.573624 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7","Type":"ContainerStarted","Data":"79ca88577655ac9c2002896d4bd9af745f54d1d2aea6682e72c2f0d005700a40"} Feb 16 12:49:45 crc kubenswrapper[4799]: I0216 12:49:45.575919 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b93c98d8-9585-4406-8d4f-54ebdb84ee2d","Type":"ContainerStarted","Data":"aae2f6abe282c581c0d3b3db697a9f869b06190f401a6884b2fe63aaa5beb493"} Feb 16 12:49:45 crc kubenswrapper[4799]: I0216 12:49:45.594630 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.017751171 podStartE2EDuration="25.59460832s" podCreationTimestamp="2026-02-16 12:49:20 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.352804958 +0000 UTC m=+1073.945820292" lastFinishedPulling="2026-02-16 12:49:44.929662107 +0000 UTC m=+1090.522677441" observedRunningTime="2026-02-16 12:49:45.590770829 +0000 UTC m=+1091.183786193" watchObservedRunningTime="2026-02-16 12:49:45.59460832 +0000 UTC m=+1091.187623654" Feb 16 12:49:45 crc kubenswrapper[4799]: I0216 12:49:45.613454 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.966239217 podStartE2EDuration="21.613431256s" podCreationTimestamp="2026-02-16 12:49:24 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.286899807 +0000 UTC m=+1073.879915151" lastFinishedPulling="2026-02-16 12:49:44.934091856 +0000 UTC m=+1090.527107190" observedRunningTime="2026-02-16 12:49:45.611295574 +0000 UTC m=+1091.204310958" watchObservedRunningTime="2026-02-16 12:49:45.613431256 +0000 UTC m=+1091.206446630" Feb 16 12:49:45 crc kubenswrapper[4799]: 
I0216 12:49:45.637306 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:45 crc kubenswrapper[4799]: I0216 12:49:45.932040 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:45 crc kubenswrapper[4799]: I0216 12:49:45.932108 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:46 crc kubenswrapper[4799]: I0216 12:49:46.022248 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 12:49:46 crc kubenswrapper[4799]: I0216 12:49:46.030477 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:46 crc kubenswrapper[4799]: I0216 12:49:46.584272 4799 generic.go:334] "Generic (PLEG): container finished" podID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerID="0677b2ed4f0c4c4fee9ab7c93aa1d391e2c5ae3c940ee43085ef2f90e92099d2" exitCode=0 Feb 16 12:49:46 crc kubenswrapper[4799]: I0216 12:49:46.584357 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerDied","Data":"0677b2ed4f0c4c4fee9ab7c93aa1d391e2c5ae3c940ee43085ef2f90e92099d2"} Feb 16 12:49:46 crc kubenswrapper[4799]: I0216 12:49:46.586364 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e3da06f-f1ef-4b8c-963b-0994cde5fab7","Type":"ContainerStarted","Data":"1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096"} Feb 16 12:49:46 crc kubenswrapper[4799]: I0216 12:49:46.641856 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.031535 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.069981 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.326453 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.451255 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.632391 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.929550 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c47885c-jd8mc"] Feb 16 12:49:47 crc kubenswrapper[4799]: E0216 12:49:47.929914 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" containerName="init" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.929929 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" containerName="init" Feb 16 12:49:47 crc kubenswrapper[4799]: E0216 12:49:47.929962 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79039adc-b677-4066-8832-95e2589654d5" containerName="dnsmasq-dns" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.929968 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="79039adc-b677-4066-8832-95e2589654d5" containerName="dnsmasq-dns" Feb 16 12:49:47 crc kubenswrapper[4799]: E0216 12:49:47.929983 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79039adc-b677-4066-8832-95e2589654d5" containerName="init" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.929991 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="79039adc-b677-4066-8832-95e2589654d5" 
containerName="init" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.930144 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="79039adc-b677-4066-8832-95e2589654d5" containerName="dnsmasq-dns" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.930170 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a9dbdf7-27b4-4bdf-8e6e-6b0a62377604" containerName="init" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.931009 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.937946 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.942325 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-cbnmk"] Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.943812 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.946652 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.958946 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c47885c-jd8mc"] Feb 16 12:49:47 crc kubenswrapper[4799]: I0216 12:49:47.969646 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cbnmk"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.003638 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-dns-svc\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.003681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.003739 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pnt\" (UniqueName: \"kubernetes.io/projected/aeb2bead-5a19-4828-9940-6836514e80cf-kube-api-access-q7pnt\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.003757 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-config\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.104883 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-ovs-rundir\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.104939 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-ovn-rundir\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.104962 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-config\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.104992 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-dns-svc\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.105043 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.105133 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.105171 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzmx\" (UniqueName: \"kubernetes.io/projected/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-kube-api-access-9wzmx\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.105207 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7pnt\" (UniqueName: \"kubernetes.io/projected/aeb2bead-5a19-4828-9940-6836514e80cf-kube-api-access-q7pnt\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.105229 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-config\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.105537 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-combined-ca-bundle\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.105979 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.106009 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-dns-svc\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.106288 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-config\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.136525 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7pnt\" (UniqueName: \"kubernetes.io/projected/aeb2bead-5a19-4828-9940-6836514e80cf-kube-api-access-q7pnt\") pod \"dnsmasq-dns-6f8c47885c-jd8mc\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.142262 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.166865 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.206628 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-ovs-rundir\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.206668 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-ovn-rundir\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.206689 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-config\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.206747 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.206778 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wzmx\" (UniqueName: \"kubernetes.io/projected/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-kube-api-access-9wzmx\") pod \"ovn-controller-metrics-cbnmk\" (UID: 
\"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.206819 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-combined-ca-bundle\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.208568 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-config\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.208806 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-ovs-rundir\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.208868 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-ovn-rundir\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.212224 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-combined-ca-bundle\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc 
kubenswrapper[4799]: I0216 12:49:48.212531 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c47885c-jd8mc"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.213209 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.231333 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.239722 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wzmx\" (UniqueName: \"kubernetes.io/projected/c4e49631-ab2b-49a4-befb-ccc2df5a47c4-kube-api-access-9wzmx\") pod \"ovn-controller-metrics-cbnmk\" (UID: \"c4e49631-ab2b-49a4-befb-ccc2df5a47c4\") " pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.269943 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cbnmk" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.292326 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54d955c87-74jrz"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.294574 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.298561 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.334262 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54d955c87-74jrz"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.375795 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d955c87-74jrz"] Feb 16 12:49:48 crc kubenswrapper[4799]: E0216 12:49:48.376410 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-vhgbq ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-54d955c87-74jrz" podUID="bd68cc24-4fcd-4676-aa45-a84f04226027" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.410555 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-dns-svc\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.410608 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-config\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.410703 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhgbq\" (UniqueName: 
\"kubernetes.io/projected/bd68cc24-4fcd-4676-aa45-a84f04226027-kube-api-access-vhgbq\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.410747 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-sb\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.410774 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-nb\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.422569 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f789c7d5f-sxvnw"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.430010 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.452736 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f789c7d5f-sxvnw"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.491810 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-2hqd8"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.492975 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.503992 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-b692-account-create-update-mmrg5"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.505635 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.507530 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512162 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-config\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512238 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-config\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512270 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhgbq\" (UniqueName: \"kubernetes.io/projected/bd68cc24-4fcd-4676-aa45-a84f04226027-kube-api-access-vhgbq\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512286 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-nb\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512306 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-sb\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512324 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq7pd\" (UniqueName: \"kubernetes.io/projected/cf16668a-2109-479a-a133-77530f391656-kube-api-access-vq7pd\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512353 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-sb\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512370 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-nb\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-dns-svc\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.512450 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-dns-svc\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.513556 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-dns-svc\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.514202 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-sb\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.514846 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-nb\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.516891 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-2hqd8"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.517283 4799 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-config\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.526984 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-b692-account-create-update-mmrg5"] Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.549964 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhgbq\" (UniqueName: \"kubernetes.io/projected/bd68cc24-4fcd-4676-aa45-a84f04226027-kube-api-access-vhgbq\") pod \"dnsmasq-dns-54d955c87-74jrz\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.623766 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-config\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.623846 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-nb\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.623892 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-sb\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: 
I0216 12:49:48.623909 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq7pd\" (UniqueName: \"kubernetes.io/projected/cf16668a-2109-479a-a133-77530f391656-kube-api-access-vq7pd\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.623933 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5733f514-65f3-49c8-a40b-586eae0eb996-operator-scripts\") pod \"watcher-db-create-2hqd8\" (UID: \"5733f514-65f3-49c8-a40b-586eae0eb996\") " pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.623977 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9tng\" (UniqueName: \"kubernetes.io/projected/6ad67827-6ed7-48ce-842d-413a84f9171d-kube-api-access-d9tng\") pod \"watcher-b692-account-create-update-mmrg5\" (UID: \"6ad67827-6ed7-48ce-842d-413a84f9171d\") " pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.624002 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-dns-svc\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.624072 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xv5c\" (UniqueName: \"kubernetes.io/projected/5733f514-65f3-49c8-a40b-586eae0eb996-kube-api-access-2xv5c\") pod \"watcher-db-create-2hqd8\" (UID: \"5733f514-65f3-49c8-a40b-586eae0eb996\") " pod="openstack/watcher-db-create-2hqd8" 
Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.624094 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ad67827-6ed7-48ce-842d-413a84f9171d-operator-scripts\") pod \"watcher-b692-account-create-update-mmrg5\" (UID: \"6ad67827-6ed7-48ce-842d-413a84f9171d\") " pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.626132 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-config\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.626684 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-nb\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.626842 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.627625 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-dns-svc\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.627637 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-sb\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.652315 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.652902 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq7pd\" (UniqueName: \"kubernetes.io/projected/cf16668a-2109-479a-a133-77530f391656-kube-api-access-vq7pd\") pod \"dnsmasq-dns-5f789c7d5f-sxvnw\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.723676 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.729695 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-dns-svc\") pod \"bd68cc24-4fcd-4676-aa45-a84f04226027\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.729778 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vhgbq\" (UniqueName: \"kubernetes.io/projected/bd68cc24-4fcd-4676-aa45-a84f04226027-kube-api-access-vhgbq\") pod \"bd68cc24-4fcd-4676-aa45-a84f04226027\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.729824 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-config\") pod \"bd68cc24-4fcd-4676-aa45-a84f04226027\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.729936 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-sb\") pod \"bd68cc24-4fcd-4676-aa45-a84f04226027\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.729965 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-nb\") pod \"bd68cc24-4fcd-4676-aa45-a84f04226027\" (UID: \"bd68cc24-4fcd-4676-aa45-a84f04226027\") " Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.730114 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd68cc24-4fcd-4676-aa45-a84f04226027" (UID: "bd68cc24-4fcd-4676-aa45-a84f04226027"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.730282 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5733f514-65f3-49c8-a40b-586eae0eb996-operator-scripts\") pod \"watcher-db-create-2hqd8\" (UID: \"5733f514-65f3-49c8-a40b-586eae0eb996\") " pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.730332 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9tng\" (UniqueName: \"kubernetes.io/projected/6ad67827-6ed7-48ce-842d-413a84f9171d-kube-api-access-d9tng\") pod \"watcher-b692-account-create-update-mmrg5\" (UID: \"6ad67827-6ed7-48ce-842d-413a84f9171d\") " pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.730429 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xv5c\" (UniqueName: \"kubernetes.io/projected/5733f514-65f3-49c8-a40b-586eae0eb996-kube-api-access-2xv5c\") pod \"watcher-db-create-2hqd8\" (UID: \"5733f514-65f3-49c8-a40b-586eae0eb996\") " pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.730440 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-config" (OuterVolumeSpecName: "config") pod "bd68cc24-4fcd-4676-aa45-a84f04226027" (UID: "bd68cc24-4fcd-4676-aa45-a84f04226027"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.730458 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ad67827-6ed7-48ce-842d-413a84f9171d-operator-scripts\") pod \"watcher-b692-account-create-update-mmrg5\" (UID: \"6ad67827-6ed7-48ce-842d-413a84f9171d\") " pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.730536 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.730553 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.731224 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd68cc24-4fcd-4676-aa45-a84f04226027" (UID: "bd68cc24-4fcd-4676-aa45-a84f04226027"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.731529 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd68cc24-4fcd-4676-aa45-a84f04226027" (UID: "bd68cc24-4fcd-4676-aa45-a84f04226027"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.733916 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5733f514-65f3-49c8-a40b-586eae0eb996-operator-scripts\") pod \"watcher-db-create-2hqd8\" (UID: \"5733f514-65f3-49c8-a40b-586eae0eb996\") " pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.734011 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd68cc24-4fcd-4676-aa45-a84f04226027-kube-api-access-vhgbq" (OuterVolumeSpecName: "kube-api-access-vhgbq") pod "bd68cc24-4fcd-4676-aa45-a84f04226027" (UID: "bd68cc24-4fcd-4676-aa45-a84f04226027"). InnerVolumeSpecName "kube-api-access-vhgbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.746912 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ad67827-6ed7-48ce-842d-413a84f9171d-operator-scripts\") pod \"watcher-b692-account-create-update-mmrg5\" (UID: \"6ad67827-6ed7-48ce-842d-413a84f9171d\") " pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.748009 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9tng\" (UniqueName: \"kubernetes.io/projected/6ad67827-6ed7-48ce-842d-413a84f9171d-kube-api-access-d9tng\") pod \"watcher-b692-account-create-update-mmrg5\" (UID: \"6ad67827-6ed7-48ce-842d-413a84f9171d\") " pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.751144 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xv5c\" (UniqueName: \"kubernetes.io/projected/5733f514-65f3-49c8-a40b-586eae0eb996-kube-api-access-2xv5c\") pod 
\"watcher-db-create-2hqd8\" (UID: \"5733f514-65f3-49c8-a40b-586eae0eb996\") " pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.805259 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.832600 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhgbq\" (UniqueName: \"kubernetes.io/projected/bd68cc24-4fcd-4676-aa45-a84f04226027-kube-api-access-vhgbq\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.832644 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.832657 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd68cc24-4fcd-4676-aa45-a84f04226027-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.847189 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.869649 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:48 crc kubenswrapper[4799]: I0216 12:49:48.873026 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c47885c-jd8mc"] Feb 16 12:49:48 crc kubenswrapper[4799]: W0216 12:49:48.881305 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaeb2bead_5a19_4828_9940_6836514e80cf.slice/crio-99d853dfdc289e530e03f12756ed43a5a98f2bfa73c998c5a88d7b689be77e66 WatchSource:0}: Error finding container 99d853dfdc289e530e03f12756ed43a5a98f2bfa73c998c5a88d7b689be77e66: Status 404 returned error can't find the container with id 99d853dfdc289e530e03f12756ed43a5a98f2bfa73c998c5a88d7b689be77e66 Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.010709 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cbnmk"] Feb 16 12:49:49 crc kubenswrapper[4799]: W0216 12:49:49.022929 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4e49631_ab2b_49a4_befb_ccc2df5a47c4.slice/crio-c5edfa0659264a20300631ecb8d91df488389e04837a443b20e75942b3afcff5 WatchSource:0}: Error finding container c5edfa0659264a20300631ecb8d91df488389e04837a443b20e75942b3afcff5: Status 404 returned error can't find the container with id c5edfa0659264a20300631ecb8d91df488389e04837a443b20e75942b3afcff5 Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.263618 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.457981 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.551047 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 
16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.571666 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.575400 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.575594 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-plsws" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.575694 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.575791 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.587475 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 12:49:49 crc kubenswrapper[4799]: W0216 12:49:49.634856 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ad67827_6ed7_48ce_842d_413a84f9171d.slice/crio-c4ba167c74526c84d36f1e41f5c34eab951bf694a9d032ca7d4c0a6cfa211c89 WatchSource:0}: Error finding container c4ba167c74526c84d36f1e41f5c34eab951bf694a9d032ca7d4c0a6cfa211c89: Status 404 returned error can't find the container with id c4ba167c74526c84d36f1e41f5c34eab951bf694a9d032ca7d4c0a6cfa211c89 Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.647113 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qd6h\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-kube-api-access-6qd6h\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.647198 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95bfd980-54e7-4b29-a896-dc1cc52291fd-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.648439 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/95bfd980-54e7-4b29-a896-dc1cc52291fd-lock\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.648478 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.648516 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.648546 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/95bfd980-54e7-4b29-a896-dc1cc52291fd-cache\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.647962 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-b692-account-create-update-mmrg5"] Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 
12:49:49.649421 4799 generic.go:334] "Generic (PLEG): container finished" podID="aeb2bead-5a19-4828-9940-6836514e80cf" containerID="e20952914294610ca3958e4c92c4d1ee20bac520c6daa4820df4b555a91171b4" exitCode=0 Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.649508 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" event={"ID":"aeb2bead-5a19-4828-9940-6836514e80cf","Type":"ContainerDied","Data":"e20952914294610ca3958e4c92c4d1ee20bac520c6daa4820df4b555a91171b4"} Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.649526 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" event={"ID":"aeb2bead-5a19-4828-9940-6836514e80cf","Type":"ContainerStarted","Data":"99d853dfdc289e530e03f12756ed43a5a98f2bfa73c998c5a88d7b689be77e66"} Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.661051 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cbnmk" event={"ID":"c4e49631-ab2b-49a4-befb-ccc2df5a47c4","Type":"ContainerStarted","Data":"62fb47049c894b43ca84b4dccb5cfcdad22c3d2183852a8ca1978dc471cdabeb"} Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.661093 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cbnmk" event={"ID":"c4e49631-ab2b-49a4-befb-ccc2df5a47c4","Type":"ContainerStarted","Data":"c5edfa0659264a20300631ecb8d91df488389e04837a443b20e75942b3afcff5"} Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.661158 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d955c87-74jrz" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.691192 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-2hqd8"] Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.704728 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-cbnmk" podStartSLOduration=2.70471336 podStartE2EDuration="2.70471336s" podCreationTimestamp="2026-02-16 12:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:49:49.698887241 +0000 UTC m=+1095.291902575" watchObservedRunningTime="2026-02-16 12:49:49.70471336 +0000 UTC m=+1095.297728694" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.750653 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qd6h\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-kube-api-access-6qd6h\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.750719 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95bfd980-54e7-4b29-a896-dc1cc52291fd-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.750832 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/95bfd980-54e7-4b29-a896-dc1cc52291fd-lock\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.750870 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.750902 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.750921 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/95bfd980-54e7-4b29-a896-dc1cc52291fd-cache\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.751461 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/95bfd980-54e7-4b29-a896-dc1cc52291fd-cache\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.753056 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/95bfd980-54e7-4b29-a896-dc1cc52291fd-lock\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.753855 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Feb 16 
12:49:49 crc kubenswrapper[4799]: E0216 12:49:49.754706 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 12:49:49 crc kubenswrapper[4799]: E0216 12:49:49.754741 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 12:49:49 crc kubenswrapper[4799]: E0216 12:49:49.754808 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift podName:95bfd980-54e7-4b29-a896-dc1cc52291fd nodeName:}" failed. No retries permitted until 2026-02-16 12:49:50.254787702 +0000 UTC m=+1095.847803116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift") pod "swift-storage-0" (UID: "95bfd980-54e7-4b29-a896-dc1cc52291fd") : configmap "swift-ring-files" not found Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.761414 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d955c87-74jrz"] Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.768045 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54d955c87-74jrz"] Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.774833 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f789c7d5f-sxvnw"] Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.780069 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95bfd980-54e7-4b29-a896-dc1cc52291fd-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.782192 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6qd6h\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-kube-api-access-6qd6h\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.796619 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 12:49:49 crc kubenswrapper[4799]: I0216 12:49:49.798401 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:49 crc kubenswrapper[4799]: W0216 12:49:49.835674 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf16668a_2109_479a_a133_77530f391656.slice/crio-ff2a47edc623d044099fba4576e848c93223ffb9a42ee8ff23cd2a111236411b WatchSource:0}: Error finding container ff2a47edc623d044099fba4576e848c93223ffb9a42ee8ff23cd2a111236411b: Status 404 returned error can't find the container with id ff2a47edc623d044099fba4576e848c93223ffb9a42ee8ff23cd2a111236411b Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.098498 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.100624 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.101350 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.103392 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.103620 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.103734 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.103934 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-j852d" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.135453 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.259988 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-dns-svc\") pod \"aeb2bead-5a19-4828-9940-6836514e80cf\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260089 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7pnt\" (UniqueName: \"kubernetes.io/projected/aeb2bead-5a19-4828-9940-6836514e80cf-kube-api-access-q7pnt\") pod \"aeb2bead-5a19-4828-9940-6836514e80cf\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260143 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-config\") pod \"aeb2bead-5a19-4828-9940-6836514e80cf\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260235 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-ovsdbserver-sb\") pod \"aeb2bead-5a19-4828-9940-6836514e80cf\" (UID: \"aeb2bead-5a19-4828-9940-6836514e80cf\") " Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260551 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260586 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68382ea2-c66d-4ea6-be55-f77490a81898-scripts\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260606 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260638 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " 
pod="openstack/swift-storage-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260731 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68382ea2-c66d-4ea6-be55-f77490a81898-config\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260752 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vgb\" (UniqueName: \"kubernetes.io/projected/68382ea2-c66d-4ea6-be55-f77490a81898-kube-api-access-f8vgb\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260774 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68382ea2-c66d-4ea6-be55-f77490a81898-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.260812 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: E0216 12:49:50.261585 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 12:49:50 crc kubenswrapper[4799]: E0216 12:49:50.261612 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 12:49:50 crc kubenswrapper[4799]: E0216 12:49:50.261658 4799 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift podName:95bfd980-54e7-4b29-a896-dc1cc52291fd nodeName:}" failed. No retries permitted until 2026-02-16 12:49:51.26163994 +0000 UTC m=+1096.854655274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift") pod "swift-storage-0" (UID: "95bfd980-54e7-4b29-a896-dc1cc52291fd") : configmap "swift-ring-files" not found Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.269444 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb2bead-5a19-4828-9940-6836514e80cf-kube-api-access-q7pnt" (OuterVolumeSpecName: "kube-api-access-q7pnt") pod "aeb2bead-5a19-4828-9940-6836514e80cf" (UID: "aeb2bead-5a19-4828-9940-6836514e80cf"). InnerVolumeSpecName "kube-api-access-q7pnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.281108 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aeb2bead-5a19-4828-9940-6836514e80cf" (UID: "aeb2bead-5a19-4828-9940-6836514e80cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.304417 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-config" (OuterVolumeSpecName: "config") pod "aeb2bead-5a19-4828-9940-6836514e80cf" (UID: "aeb2bead-5a19-4828-9940-6836514e80cf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.305231 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aeb2bead-5a19-4828-9940-6836514e80cf" (UID: "aeb2bead-5a19-4828-9940-6836514e80cf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362686 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362750 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68382ea2-c66d-4ea6-be55-f77490a81898-scripts\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362771 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68382ea2-c66d-4ea6-be55-f77490a81898-config\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362849 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8vgb\" (UniqueName: \"kubernetes.io/projected/68382ea2-c66d-4ea6-be55-f77490a81898-kube-api-access-f8vgb\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362864 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68382ea2-c66d-4ea6-be55-f77490a81898-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362882 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362951 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7pnt\" (UniqueName: \"kubernetes.io/projected/aeb2bead-5a19-4828-9940-6836514e80cf-kube-api-access-q7pnt\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362962 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362972 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.362981 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/aeb2bead-5a19-4828-9940-6836514e80cf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.364694 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68382ea2-c66d-4ea6-be55-f77490a81898-config\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.364817 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68382ea2-c66d-4ea6-be55-f77490a81898-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.365014 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68382ea2-c66d-4ea6-be55-f77490a81898-scripts\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.368886 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.371442 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.372968 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/68382ea2-c66d-4ea6-be55-f77490a81898-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.383694 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vgb\" (UniqueName: \"kubernetes.io/projected/68382ea2-c66d-4ea6-be55-f77490a81898-kube-api-access-f8vgb\") pod \"ovn-northd-0\" (UID: \"68382ea2-c66d-4ea6-be55-f77490a81898\") " pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.424957 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.669480 4799 generic.go:334] "Generic (PLEG): container finished" podID="6ad67827-6ed7-48ce-842d-413a84f9171d" containerID="107b6b0e33708aff2a2b76daf80c82607b7946be11c7ba535b5e2249ca2ea614" exitCode=0 Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.669676 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-b692-account-create-update-mmrg5" event={"ID":"6ad67827-6ed7-48ce-842d-413a84f9171d","Type":"ContainerDied","Data":"107b6b0e33708aff2a2b76daf80c82607b7946be11c7ba535b5e2249ca2ea614"} Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.669816 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-b692-account-create-update-mmrg5" event={"ID":"6ad67827-6ed7-48ce-842d-413a84f9171d","Type":"ContainerStarted","Data":"c4ba167c74526c84d36f1e41f5c34eab951bf694a9d032ca7d4c0a6cfa211c89"} Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.671756 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.671793 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c47885c-jd8mc" event={"ID":"aeb2bead-5a19-4828-9940-6836514e80cf","Type":"ContainerDied","Data":"99d853dfdc289e530e03f12756ed43a5a98f2bfa73c998c5a88d7b689be77e66"} Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.671859 4799 scope.go:117] "RemoveContainer" containerID="e20952914294610ca3958e4c92c4d1ee20bac520c6daa4820df4b555a91171b4" Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.673668 4799 generic.go:334] "Generic (PLEG): container finished" podID="5733f514-65f3-49c8-a40b-586eae0eb996" containerID="bffd513b46ccab39835a381969309e0e3110475eb99e184756a9c39f61b14a9d" exitCode=0 Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.673729 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-2hqd8" event={"ID":"5733f514-65f3-49c8-a40b-586eae0eb996","Type":"ContainerDied","Data":"bffd513b46ccab39835a381969309e0e3110475eb99e184756a9c39f61b14a9d"} Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.673747 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-2hqd8" event={"ID":"5733f514-65f3-49c8-a40b-586eae0eb996","Type":"ContainerStarted","Data":"e6ec06c9a0d72bee4fe6e0e279d196f326bd6eae7e0d6ae944a7ccc9ffedf6e4"} Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.691385 4799 generic.go:334] "Generic (PLEG): container finished" podID="cf16668a-2109-479a-a133-77530f391656" containerID="906769aedc4987ca62b5057b97c1bad9332b660f02bedd4d8defcb3e2caeccfd" exitCode=0 Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.691669 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" 
event={"ID":"cf16668a-2109-479a-a133-77530f391656","Type":"ContainerDied","Data":"906769aedc4987ca62b5057b97c1bad9332b660f02bedd4d8defcb3e2caeccfd"} Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.691726 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" event={"ID":"cf16668a-2109-479a-a133-77530f391656","Type":"ContainerStarted","Data":"ff2a47edc623d044099fba4576e848c93223ffb9a42ee8ff23cd2a111236411b"} Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.769819 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c47885c-jd8mc"] Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.783690 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c47885c-jd8mc"] Feb 16 12:49:50 crc kubenswrapper[4799]: I0216 12:49:50.913296 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 12:49:50 crc kubenswrapper[4799]: W0216 12:49:50.919948 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68382ea2_c66d_4ea6_be55_f77490a81898.slice/crio-79ddbb643516de9d77b35118fee1c30a0effde4f81b66ef5d9a74bfd8f9e9daa WatchSource:0}: Error finding container 79ddbb643516de9d77b35118fee1c30a0effde4f81b66ef5d9a74bfd8f9e9daa: Status 404 returned error can't find the container with id 79ddbb643516de9d77b35118fee1c30a0effde4f81b66ef5d9a74bfd8f9e9daa Feb 16 12:49:51 crc kubenswrapper[4799]: I0216 12:49:51.164755 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb2bead-5a19-4828-9940-6836514e80cf" path="/var/lib/kubelet/pods/aeb2bead-5a19-4828-9940-6836514e80cf/volumes" Feb 16 12:49:51 crc kubenswrapper[4799]: I0216 12:49:51.165795 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd68cc24-4fcd-4676-aa45-a84f04226027" path="/var/lib/kubelet/pods/bd68cc24-4fcd-4676-aa45-a84f04226027/volumes" Feb 16 12:49:51 
crc kubenswrapper[4799]: I0216 12:49:51.278762 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:51 crc kubenswrapper[4799]: E0216 12:49:51.278951 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 12:49:51 crc kubenswrapper[4799]: E0216 12:49:51.278988 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 12:49:51 crc kubenswrapper[4799]: E0216 12:49:51.279050 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift podName:95bfd980-54e7-4b29-a896-dc1cc52291fd nodeName:}" failed. No retries permitted until 2026-02-16 12:49:53.279024964 +0000 UTC m=+1098.872040298 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift") pod "swift-storage-0" (UID: "95bfd980-54e7-4b29-a896-dc1cc52291fd") : configmap "swift-ring-files" not found Feb 16 12:49:51 crc kubenswrapper[4799]: I0216 12:49:51.717325 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68382ea2-c66d-4ea6-be55-f77490a81898","Type":"ContainerStarted","Data":"bacd6c4268e801aa24c2386b900018e52c353f290b90a9a5a6d8acbd7cf50f67"} Feb 16 12:49:51 crc kubenswrapper[4799]: I0216 12:49:51.717900 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68382ea2-c66d-4ea6-be55-f77490a81898","Type":"ContainerStarted","Data":"79ddbb643516de9d77b35118fee1c30a0effde4f81b66ef5d9a74bfd8f9e9daa"} Feb 16 12:49:51 crc kubenswrapper[4799]: I0216 12:49:51.725045 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" event={"ID":"cf16668a-2109-479a-a133-77530f391656","Type":"ContainerStarted","Data":"a602c4b86160a263e1d1ca0d1ffdb9bb89978558e649511ed87e0fd8e5be8c30"} Feb 16 12:49:51 crc kubenswrapper[4799]: I0216 12:49:51.725546 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.054865 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.076529 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" podStartSLOduration=4.076506801 podStartE2EDuration="4.076506801s" podCreationTimestamp="2026-02-16 12:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:49:51.750570319 +0000 UTC m=+1097.343585653" watchObservedRunningTime="2026-02-16 12:49:52.076506801 +0000 UTC m=+1097.669522145" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.085243 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.194992 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ad67827-6ed7-48ce-842d-413a84f9171d-operator-scripts\") pod \"6ad67827-6ed7-48ce-842d-413a84f9171d\" (UID: \"6ad67827-6ed7-48ce-842d-413a84f9171d\") " Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.195109 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5733f514-65f3-49c8-a40b-586eae0eb996-operator-scripts\") pod \"5733f514-65f3-49c8-a40b-586eae0eb996\" (UID: \"5733f514-65f3-49c8-a40b-586eae0eb996\") " Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.195202 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xv5c\" (UniqueName: \"kubernetes.io/projected/5733f514-65f3-49c8-a40b-586eae0eb996-kube-api-access-2xv5c\") pod \"5733f514-65f3-49c8-a40b-586eae0eb996\" (UID: \"5733f514-65f3-49c8-a40b-586eae0eb996\") " Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.195249 4799 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9tng\" (UniqueName: \"kubernetes.io/projected/6ad67827-6ed7-48ce-842d-413a84f9171d-kube-api-access-d9tng\") pod \"6ad67827-6ed7-48ce-842d-413a84f9171d\" (UID: \"6ad67827-6ed7-48ce-842d-413a84f9171d\") " Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.196083 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5733f514-65f3-49c8-a40b-586eae0eb996-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5733f514-65f3-49c8-a40b-586eae0eb996" (UID: "5733f514-65f3-49c8-a40b-586eae0eb996"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.196216 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ad67827-6ed7-48ce-842d-413a84f9171d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ad67827-6ed7-48ce-842d-413a84f9171d" (UID: "6ad67827-6ed7-48ce-842d-413a84f9171d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.196606 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ad67827-6ed7-48ce-842d-413a84f9171d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.196631 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5733f514-65f3-49c8-a40b-586eae0eb996-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.200652 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad67827-6ed7-48ce-842d-413a84f9171d-kube-api-access-d9tng" (OuterVolumeSpecName: "kube-api-access-d9tng") pod "6ad67827-6ed7-48ce-842d-413a84f9171d" (UID: "6ad67827-6ed7-48ce-842d-413a84f9171d"). InnerVolumeSpecName "kube-api-access-d9tng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.201783 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5733f514-65f3-49c8-a40b-586eae0eb996-kube-api-access-2xv5c" (OuterVolumeSpecName: "kube-api-access-2xv5c") pod "5733f514-65f3-49c8-a40b-586eae0eb996" (UID: "5733f514-65f3-49c8-a40b-586eae0eb996"). InnerVolumeSpecName "kube-api-access-2xv5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.298419 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xv5c\" (UniqueName: \"kubernetes.io/projected/5733f514-65f3-49c8-a40b-586eae0eb996-kube-api-access-2xv5c\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.298475 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9tng\" (UniqueName: \"kubernetes.io/projected/6ad67827-6ed7-48ce-842d-413a84f9171d-kube-api-access-d9tng\") on node \"crc\" DevicePath \"\"" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.732441 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-b692-account-create-update-mmrg5" event={"ID":"6ad67827-6ed7-48ce-842d-413a84f9171d","Type":"ContainerDied","Data":"c4ba167c74526c84d36f1e41f5c34eab951bf694a9d032ca7d4c0a6cfa211c89"} Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.732485 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-b692-account-create-update-mmrg5" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.732510 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ba167c74526c84d36f1e41f5c34eab951bf694a9d032ca7d4c0a6cfa211c89" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.735103 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68382ea2-c66d-4ea6-be55-f77490a81898","Type":"ContainerStarted","Data":"73cf8d8256363d62f3d12bd8515146600a417cf4c98d2026a6fb206b9679e1b1"} Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.735326 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.736940 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-2hqd8" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.736950 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-2hqd8" event={"ID":"5733f514-65f3-49c8-a40b-586eae0eb996","Type":"ContainerDied","Data":"e6ec06c9a0d72bee4fe6e0e279d196f326bd6eae7e0d6ae944a7ccc9ffedf6e4"} Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.736978 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ec06c9a0d72bee4fe6e0e279d196f326bd6eae7e0d6ae944a7ccc9ffedf6e4" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.767780 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.256453779 podStartE2EDuration="2.767763267s" podCreationTimestamp="2026-02-16 12:49:50 +0000 UTC" firstStartedPulling="2026-02-16 12:49:50.922883016 +0000 UTC m=+1096.515898350" lastFinishedPulling="2026-02-16 12:49:51.434192514 +0000 UTC m=+1097.027207838" observedRunningTime="2026-02-16 12:49:52.758027675 +0000 UTC m=+1098.351043009" watchObservedRunningTime="2026-02-16 12:49:52.767763267 +0000 UTC m=+1098.360778601" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.949808 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-h2lxg"] Feb 16 12:49:52 crc kubenswrapper[4799]: E0216 12:49:52.950899 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad67827-6ed7-48ce-842d-413a84f9171d" containerName="mariadb-account-create-update" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.950919 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad67827-6ed7-48ce-842d-413a84f9171d" containerName="mariadb-account-create-update" Feb 16 12:49:52 crc kubenswrapper[4799]: E0216 12:49:52.950933 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb2bead-5a19-4828-9940-6836514e80cf" containerName="init" Feb 16 12:49:52 
crc kubenswrapper[4799]: I0216 12:49:52.950939 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb2bead-5a19-4828-9940-6836514e80cf" containerName="init" Feb 16 12:49:52 crc kubenswrapper[4799]: E0216 12:49:52.950978 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5733f514-65f3-49c8-a40b-586eae0eb996" containerName="mariadb-database-create" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.950986 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5733f514-65f3-49c8-a40b-586eae0eb996" containerName="mariadb-database-create" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.951208 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad67827-6ed7-48ce-842d-413a84f9171d" containerName="mariadb-account-create-update" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.951234 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5733f514-65f3-49c8-a40b-586eae0eb996" containerName="mariadb-database-create" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.951245 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb2bead-5a19-4828-9940-6836514e80cf" containerName="init" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.952108 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h2lxg" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.954070 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 12:49:52 crc kubenswrapper[4799]: I0216 12:49:52.969072 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2lxg"] Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.009562 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eaff024-98d4-4065-9392-6786558aa720-operator-scripts\") pod \"root-account-create-update-h2lxg\" (UID: \"3eaff024-98d4-4065-9392-6786558aa720\") " pod="openstack/root-account-create-update-h2lxg" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.009654 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfgg\" (UniqueName: \"kubernetes.io/projected/3eaff024-98d4-4065-9392-6786558aa720-kube-api-access-kkfgg\") pod \"root-account-create-update-h2lxg\" (UID: \"3eaff024-98d4-4065-9392-6786558aa720\") " pod="openstack/root-account-create-update-h2lxg" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.111611 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eaff024-98d4-4065-9392-6786558aa720-operator-scripts\") pod \"root-account-create-update-h2lxg\" (UID: \"3eaff024-98d4-4065-9392-6786558aa720\") " pod="openstack/root-account-create-update-h2lxg" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.111669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfgg\" (UniqueName: \"kubernetes.io/projected/3eaff024-98d4-4065-9392-6786558aa720-kube-api-access-kkfgg\") pod \"root-account-create-update-h2lxg\" (UID: 
\"3eaff024-98d4-4065-9392-6786558aa720\") " pod="openstack/root-account-create-update-h2lxg" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.113085 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eaff024-98d4-4065-9392-6786558aa720-operator-scripts\") pod \"root-account-create-update-h2lxg\" (UID: \"3eaff024-98d4-4065-9392-6786558aa720\") " pod="openstack/root-account-create-update-h2lxg" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.132339 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfgg\" (UniqueName: \"kubernetes.io/projected/3eaff024-98d4-4065-9392-6786558aa720-kube-api-access-kkfgg\") pod \"root-account-create-update-h2lxg\" (UID: \"3eaff024-98d4-4065-9392-6786558aa720\") " pod="openstack/root-account-create-update-h2lxg" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.280808 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h2lxg" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.315550 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:49:53 crc kubenswrapper[4799]: E0216 12:49:53.315759 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 12:49:53 crc kubenswrapper[4799]: E0216 12:49:53.315791 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 12:49:53 crc kubenswrapper[4799]: E0216 12:49:53.315845 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift podName:95bfd980-54e7-4b29-a896-dc1cc52291fd nodeName:}" failed. No retries permitted until 2026-02-16 12:49:57.31582978 +0000 UTC m=+1102.908845114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift") pod "swift-storage-0" (UID: "95bfd980-54e7-4b29-a896-dc1cc52291fd") : configmap "swift-ring-files" not found Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.320259 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-j6ghf"] Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.321563 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.324747 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.324784 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.325031 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.335691 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j6ghf"] Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.417756 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q86mp\" (UniqueName: \"kubernetes.io/projected/e330eb09-5b74-44cd-9812-1aaada5f979c-kube-api-access-q86mp\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.417815 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-ring-data-devices\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.418091 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e330eb09-5b74-44cd-9812-1aaada5f979c-etc-swift\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: 
I0216 12:49:53.418288 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-combined-ca-bundle\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.418432 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-dispersionconf\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.418513 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-scripts\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.418553 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-swiftconf\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.519887 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-combined-ca-bundle\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 
12:49:53.519981 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-dispersionconf\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.520024 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-scripts\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.520050 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-swiftconf\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.520080 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q86mp\" (UniqueName: \"kubernetes.io/projected/e330eb09-5b74-44cd-9812-1aaada5f979c-kube-api-access-q86mp\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.520106 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-ring-data-devices\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.520218 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e330eb09-5b74-44cd-9812-1aaada5f979c-etc-swift\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.520738 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e330eb09-5b74-44cd-9812-1aaada5f979c-etc-swift\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.522037 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-ring-data-devices\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.523335 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-scripts\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.528975 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-combined-ca-bundle\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.529712 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-dispersionconf\") pod 
\"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.542299 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-swiftconf\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.550632 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q86mp\" (UniqueName: \"kubernetes.io/projected/e330eb09-5b74-44cd-9812-1aaada5f979c-kube-api-access-q86mp\") pod \"swift-ring-rebalance-j6ghf\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") " pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:53 crc kubenswrapper[4799]: I0216 12:49:53.642481 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j6ghf" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.202210 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j6ghf"] Feb 16 12:49:56 crc kubenswrapper[4799]: W0216 12:49:56.212676 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode330eb09_5b74_44cd_9812_1aaada5f979c.slice/crio-9dfd09c797619b48352fd2aa0628a134017412a4d3363ba9adce97eed051229c WatchSource:0}: Error finding container 9dfd09c797619b48352fd2aa0628a134017412a4d3363ba9adce97eed051229c: Status 404 returned error can't find the container with id 9dfd09c797619b48352fd2aa0628a134017412a4d3363ba9adce97eed051229c Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.219092 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-snrvh"] Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.220640 4799 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-snrvh" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.228260 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-snrvh"] Feb 16 12:49:56 crc kubenswrapper[4799]: W0216 12:49:56.274174 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eaff024_98d4_4065_9392_6786558aa720.slice/crio-00557856672d278d53f2a41ae9269ccff749696935e98c7788e60ca0d1c9f59b WatchSource:0}: Error finding container 00557856672d278d53f2a41ae9269ccff749696935e98c7788e60ca0d1c9f59b: Status 404 returned error can't find the container with id 00557856672d278d53f2a41ae9269ccff749696935e98c7788e60ca0d1c9f59b Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.274912 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2lxg"] Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.295149 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smsqc\" (UniqueName: \"kubernetes.io/projected/c90542f3-200f-4070-b73a-3b8bfd004fdc-kube-api-access-smsqc\") pod \"glance-db-create-snrvh\" (UID: \"c90542f3-200f-4070-b73a-3b8bfd004fdc\") " pod="openstack/glance-db-create-snrvh" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.295616 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90542f3-200f-4070-b73a-3b8bfd004fdc-operator-scripts\") pod \"glance-db-create-snrvh\" (UID: \"c90542f3-200f-4070-b73a-3b8bfd004fdc\") " pod="openstack/glance-db-create-snrvh" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.335610 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-82de-account-create-update-t8zkv"] Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 
12:49:56.337216 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.342564 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.349597 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-82de-account-create-update-t8zkv"] Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.397295 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smsqc\" (UniqueName: \"kubernetes.io/projected/c90542f3-200f-4070-b73a-3b8bfd004fdc-kube-api-access-smsqc\") pod \"glance-db-create-snrvh\" (UID: \"c90542f3-200f-4070-b73a-3b8bfd004fdc\") " pod="openstack/glance-db-create-snrvh" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.397383 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90542f3-200f-4070-b73a-3b8bfd004fdc-operator-scripts\") pod \"glance-db-create-snrvh\" (UID: \"c90542f3-200f-4070-b73a-3b8bfd004fdc\") " pod="openstack/glance-db-create-snrvh" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.398578 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90542f3-200f-4070-b73a-3b8bfd004fdc-operator-scripts\") pod \"glance-db-create-snrvh\" (UID: \"c90542f3-200f-4070-b73a-3b8bfd004fdc\") " pod="openstack/glance-db-create-snrvh" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.418842 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smsqc\" (UniqueName: \"kubernetes.io/projected/c90542f3-200f-4070-b73a-3b8bfd004fdc-kube-api-access-smsqc\") pod \"glance-db-create-snrvh\" (UID: \"c90542f3-200f-4070-b73a-3b8bfd004fdc\") " 
pod="openstack/glance-db-create-snrvh" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.499733 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4x6\" (UniqueName: \"kubernetes.io/projected/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-kube-api-access-qc4x6\") pod \"glance-82de-account-create-update-t8zkv\" (UID: \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\") " pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.499824 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-operator-scripts\") pod \"glance-82de-account-create-update-t8zkv\" (UID: \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\") " pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.553050 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-snrvh" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.601024 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-operator-scripts\") pod \"glance-82de-account-create-update-t8zkv\" (UID: \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\") " pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.601240 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4x6\" (UniqueName: \"kubernetes.io/projected/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-kube-api-access-qc4x6\") pod \"glance-82de-account-create-update-t8zkv\" (UID: \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\") " pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.601995 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-operator-scripts\") pod \"glance-82de-account-create-update-t8zkv\" (UID: \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\") " pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.621526 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc4x6\" (UniqueName: \"kubernetes.io/projected/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-kube-api-access-qc4x6\") pod \"glance-82de-account-create-update-t8zkv\" (UID: \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\") " pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.771354 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-82de-account-create-update-t8zkv"
Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.786317 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lxg" event={"ID":"3eaff024-98d4-4065-9392-6786558aa720","Type":"ContainerStarted","Data":"992403e79b53313dd3821cf0f592b6e35116ab2864fdb3629455fa8dcc9e7e93"}
Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.786358 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lxg" event={"ID":"3eaff024-98d4-4065-9392-6786558aa720","Type":"ContainerStarted","Data":"00557856672d278d53f2a41ae9269ccff749696935e98c7788e60ca0d1c9f59b"}
Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.789538 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerStarted","Data":"f233826153818b953c7c0806a3d1aa5f379a1798f7799f53a3b37914e7663993"}
Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.791139 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6ghf" event={"ID":"e330eb09-5b74-44cd-9812-1aaada5f979c","Type":"ContainerStarted","Data":"9dfd09c797619b48352fd2aa0628a134017412a4d3363ba9adce97eed051229c"}
Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.806398 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-h2lxg" podStartSLOduration=4.806380223 podStartE2EDuration="4.806380223s" podCreationTimestamp="2026-02-16 12:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:49:56.799750701 +0000 UTC m=+1102.392766035" watchObservedRunningTime="2026-02-16 12:49:56.806380223 +0000 UTC m=+1102.399395557"
Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.967753 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ms4pb"]
Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.968868 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ms4pb"
Feb 16 12:49:56 crc kubenswrapper[4799]: I0216 12:49:56.979507 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ms4pb"]
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.006082 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-snrvh"]
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.093514 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-df4c-account-create-update-qbwnq"]
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.094773 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-df4c-account-create-update-qbwnq"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.097279 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.101815 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-df4c-account-create-update-qbwnq"]
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.110579 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl477\" (UniqueName: \"kubernetes.io/projected/26a1b93a-7a9e-49a5-8264-a9afb09de45d-kube-api-access-dl477\") pod \"keystone-db-create-ms4pb\" (UID: \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\") " pod="openstack/keystone-db-create-ms4pb"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.110764 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26a1b93a-7a9e-49a5-8264-a9afb09de45d-operator-scripts\") pod \"keystone-db-create-ms4pb\" (UID: \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\") " pod="openstack/keystone-db-create-ms4pb"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.185950 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-bq6vr"]
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.187177 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bq6vr"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.194447 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bq6vr"]
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.212431 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0089fc-1608-4b31-9219-bff2d2cbed59-operator-scripts\") pod \"keystone-df4c-account-create-update-qbwnq\" (UID: \"4d0089fc-1608-4b31-9219-bff2d2cbed59\") " pod="openstack/keystone-df4c-account-create-update-qbwnq"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.212668 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2nnm\" (UniqueName: \"kubernetes.io/projected/4d0089fc-1608-4b31-9219-bff2d2cbed59-kube-api-access-k2nnm\") pod \"keystone-df4c-account-create-update-qbwnq\" (UID: \"4d0089fc-1608-4b31-9219-bff2d2cbed59\") " pod="openstack/keystone-df4c-account-create-update-qbwnq"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.213613 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26a1b93a-7a9e-49a5-8264-a9afb09de45d-operator-scripts\") pod \"keystone-db-create-ms4pb\" (UID: \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\") " pod="openstack/keystone-db-create-ms4pb"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.214429 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26a1b93a-7a9e-49a5-8264-a9afb09de45d-operator-scripts\") pod \"keystone-db-create-ms4pb\" (UID: \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\") " pod="openstack/keystone-db-create-ms4pb"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.214588 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl477\" (UniqueName: \"kubernetes.io/projected/26a1b93a-7a9e-49a5-8264-a9afb09de45d-kube-api-access-dl477\") pod \"keystone-db-create-ms4pb\" (UID: \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\") " pod="openstack/keystone-db-create-ms4pb"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.235161 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl477\" (UniqueName: \"kubernetes.io/projected/26a1b93a-7a9e-49a5-8264-a9afb09de45d-kube-api-access-dl477\") pod \"keystone-db-create-ms4pb\" (UID: \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\") " pod="openstack/keystone-db-create-ms4pb"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.291640 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ms4pb"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.317699 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.317795 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wnkn\" (UniqueName: \"kubernetes.io/projected/140b9f7a-d350-46a3-bd9d-83180f2d839b-kube-api-access-2wnkn\") pod \"placement-db-create-bq6vr\" (UID: \"140b9f7a-d350-46a3-bd9d-83180f2d839b\") " pod="openstack/placement-db-create-bq6vr"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.317840 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0089fc-1608-4b31-9219-bff2d2cbed59-operator-scripts\") pod \"keystone-df4c-account-create-update-qbwnq\" (UID: \"4d0089fc-1608-4b31-9219-bff2d2cbed59\") " pod="openstack/keystone-df4c-account-create-update-qbwnq"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.317883 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140b9f7a-d350-46a3-bd9d-83180f2d839b-operator-scripts\") pod \"placement-db-create-bq6vr\" (UID: \"140b9f7a-d350-46a3-bd9d-83180f2d839b\") " pod="openstack/placement-db-create-bq6vr"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.317947 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2nnm\" (UniqueName: \"kubernetes.io/projected/4d0089fc-1608-4b31-9219-bff2d2cbed59-kube-api-access-k2nnm\") pod \"keystone-df4c-account-create-update-qbwnq\" (UID: \"4d0089fc-1608-4b31-9219-bff2d2cbed59\") " pod="openstack/keystone-df4c-account-create-update-qbwnq"
Feb 16 12:49:57 crc kubenswrapper[4799]: E0216 12:49:57.318300 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 12:49:57 crc kubenswrapper[4799]: E0216 12:49:57.318312 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 12:49:57 crc kubenswrapper[4799]: E0216 12:49:57.318350 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift podName:95bfd980-54e7-4b29-a896-dc1cc52291fd nodeName:}" failed. No retries permitted until 2026-02-16 12:50:05.31833511 +0000 UTC m=+1110.911350444 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift") pod "swift-storage-0" (UID: "95bfd980-54e7-4b29-a896-dc1cc52291fd") : configmap "swift-ring-files" not found
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.319416 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0089fc-1608-4b31-9219-bff2d2cbed59-operator-scripts\") pod \"keystone-df4c-account-create-update-qbwnq\" (UID: \"4d0089fc-1608-4b31-9219-bff2d2cbed59\") " pod="openstack/keystone-df4c-account-create-update-qbwnq"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.335666 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2nnm\" (UniqueName: \"kubernetes.io/projected/4d0089fc-1608-4b31-9219-bff2d2cbed59-kube-api-access-k2nnm\") pod \"keystone-df4c-account-create-update-qbwnq\" (UID: \"4d0089fc-1608-4b31-9219-bff2d2cbed59\") " pod="openstack/keystone-df4c-account-create-update-qbwnq"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.385703 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d05a-account-create-update-7zctx"]
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.387146 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d05a-account-create-update-7zctx"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.389012 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.398211 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d05a-account-create-update-7zctx"]
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.414951 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-df4c-account-create-update-qbwnq"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.419786 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wnkn\" (UniqueName: \"kubernetes.io/projected/140b9f7a-d350-46a3-bd9d-83180f2d839b-kube-api-access-2wnkn\") pod \"placement-db-create-bq6vr\" (UID: \"140b9f7a-d350-46a3-bd9d-83180f2d839b\") " pod="openstack/placement-db-create-bq6vr"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.419863 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140b9f7a-d350-46a3-bd9d-83180f2d839b-operator-scripts\") pod \"placement-db-create-bq6vr\" (UID: \"140b9f7a-d350-46a3-bd9d-83180f2d839b\") " pod="openstack/placement-db-create-bq6vr"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.420473 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140b9f7a-d350-46a3-bd9d-83180f2d839b-operator-scripts\") pod \"placement-db-create-bq6vr\" (UID: \"140b9f7a-d350-46a3-bd9d-83180f2d839b\") " pod="openstack/placement-db-create-bq6vr"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.436328 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wnkn\" (UniqueName: \"kubernetes.io/projected/140b9f7a-d350-46a3-bd9d-83180f2d839b-kube-api-access-2wnkn\") pod \"placement-db-create-bq6vr\" (UID: \"140b9f7a-d350-46a3-bd9d-83180f2d839b\") " pod="openstack/placement-db-create-bq6vr"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.512486 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bq6vr"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.522057 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvw6k\" (UniqueName: \"kubernetes.io/projected/7801634e-581b-49f9-b90a-a1cd47b1d2fb-kube-api-access-jvw6k\") pod \"placement-d05a-account-create-update-7zctx\" (UID: \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\") " pod="openstack/placement-d05a-account-create-update-7zctx"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.522104 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7801634e-581b-49f9-b90a-a1cd47b1d2fb-operator-scripts\") pod \"placement-d05a-account-create-update-7zctx\" (UID: \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\") " pod="openstack/placement-d05a-account-create-update-7zctx"
Feb 16 12:49:57 crc kubenswrapper[4799]: W0216 12:49:57.612415 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc90542f3_200f_4070_b73a_3b8bfd004fdc.slice/crio-0aa7da40ddda6869cbc6271e995e2a3687c2a7555ee3b24296ac00dfda997a20 WatchSource:0}: Error finding container 0aa7da40ddda6869cbc6271e995e2a3687c2a7555ee3b24296ac00dfda997a20: Status 404 returned error can't find the container with id 0aa7da40ddda6869cbc6271e995e2a3687c2a7555ee3b24296ac00dfda997a20
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.623716 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvw6k\" (UniqueName: \"kubernetes.io/projected/7801634e-581b-49f9-b90a-a1cd47b1d2fb-kube-api-access-jvw6k\") pod \"placement-d05a-account-create-update-7zctx\" (UID: \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\") " pod="openstack/placement-d05a-account-create-update-7zctx"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.623770 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7801634e-581b-49f9-b90a-a1cd47b1d2fb-operator-scripts\") pod \"placement-d05a-account-create-update-7zctx\" (UID: \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\") " pod="openstack/placement-d05a-account-create-update-7zctx"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.624524 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7801634e-581b-49f9-b90a-a1cd47b1d2fb-operator-scripts\") pod \"placement-d05a-account-create-update-7zctx\" (UID: \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\") " pod="openstack/placement-d05a-account-create-update-7zctx"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.640215 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvw6k\" (UniqueName: \"kubernetes.io/projected/7801634e-581b-49f9-b90a-a1cd47b1d2fb-kube-api-access-jvw6k\") pod \"placement-d05a-account-create-update-7zctx\" (UID: \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\") " pod="openstack/placement-d05a-account-create-update-7zctx"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.714681 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d05a-account-create-update-7zctx"
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.801390 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snrvh" event={"ID":"c90542f3-200f-4070-b73a-3b8bfd004fdc","Type":"ContainerStarted","Data":"0aa7da40ddda6869cbc6271e995e2a3687c2a7555ee3b24296ac00dfda997a20"}
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.803554 4799 generic.go:334] "Generic (PLEG): container finished" podID="3eaff024-98d4-4065-9392-6786558aa720" containerID="992403e79b53313dd3821cf0f592b6e35116ab2864fdb3629455fa8dcc9e7e93" exitCode=0
Feb 16 12:49:57 crc kubenswrapper[4799]: I0216 12:49:57.803580 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lxg" event={"ID":"3eaff024-98d4-4065-9392-6786558aa720","Type":"ContainerDied","Data":"992403e79b53313dd3821cf0f592b6e35116ab2864fdb3629455fa8dcc9e7e93"}
Feb 16 12:49:58 crc kubenswrapper[4799]: I0216 12:49:58.808602 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw"
Feb 16 12:49:58 crc kubenswrapper[4799]: I0216 12:49:58.884304 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f8f5886f-zzspb"]
Feb 16 12:49:58 crc kubenswrapper[4799]: I0216 12:49:58.884944 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" podUID="21585780-9181-47a1-beb1-72cbd9970fb9" containerName="dnsmasq-dns" containerID="cri-o://886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e" gracePeriod=10
Feb 16 12:49:58 crc kubenswrapper[4799]: I0216 12:49:58.890311 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerStarted","Data":"c3f321bfaa92e5cc62a77dfc2c67710158e95540673cd1e66e556b54e609c988"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.211794 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2lxg"
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.256446 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eaff024-98d4-4065-9392-6786558aa720-operator-scripts\") pod \"3eaff024-98d4-4065-9392-6786558aa720\" (UID: \"3eaff024-98d4-4065-9392-6786558aa720\") "
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.256586 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkfgg\" (UniqueName: \"kubernetes.io/projected/3eaff024-98d4-4065-9392-6786558aa720-kube-api-access-kkfgg\") pod \"3eaff024-98d4-4065-9392-6786558aa720\" (UID: \"3eaff024-98d4-4065-9392-6786558aa720\") "
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.262891 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eaff024-98d4-4065-9392-6786558aa720-kube-api-access-kkfgg" (OuterVolumeSpecName: "kube-api-access-kkfgg") pod "3eaff024-98d4-4065-9392-6786558aa720" (UID: "3eaff024-98d4-4065-9392-6786558aa720"). InnerVolumeSpecName "kube-api-access-kkfgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.264574 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eaff024-98d4-4065-9392-6786558aa720-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3eaff024-98d4-4065-9392-6786558aa720" (UID: "3eaff024-98d4-4065-9392-6786558aa720"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.360050 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkfgg\" (UniqueName: \"kubernetes.io/projected/3eaff024-98d4-4065-9392-6786558aa720-kube-api-access-kkfgg\") on node \"crc\" DevicePath \"\""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.360087 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eaff024-98d4-4065-9392-6786558aa720-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.366590 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-82de-account-create-update-t8zkv"]
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.449570 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb"
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.604408 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bq6vr"]
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.638750 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h2lxg"]
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.652851 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h2lxg"]
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.673395 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlw7w\" (UniqueName: \"kubernetes.io/projected/21585780-9181-47a1-beb1-72cbd9970fb9-kube-api-access-jlw7w\") pod \"21585780-9181-47a1-beb1-72cbd9970fb9\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") "
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.673543 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-dns-svc\") pod \"21585780-9181-47a1-beb1-72cbd9970fb9\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") "
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.674274 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-config\") pod \"21585780-9181-47a1-beb1-72cbd9970fb9\" (UID: \"21585780-9181-47a1-beb1-72cbd9970fb9\") "
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.683770 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21585780-9181-47a1-beb1-72cbd9970fb9-kube-api-access-jlw7w" (OuterVolumeSpecName: "kube-api-access-jlw7w") pod "21585780-9181-47a1-beb1-72cbd9970fb9" (UID: "21585780-9181-47a1-beb1-72cbd9970fb9"). InnerVolumeSpecName "kube-api-access-jlw7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.686790 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-df4c-account-create-update-qbwnq"]
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.727544 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-config" (OuterVolumeSpecName: "config") pod "21585780-9181-47a1-beb1-72cbd9970fb9" (UID: "21585780-9181-47a1-beb1-72cbd9970fb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.729167 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21585780-9181-47a1-beb1-72cbd9970fb9" (UID: "21585780-9181-47a1-beb1-72cbd9970fb9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.776672 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.776732 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21585780-9181-47a1-beb1-72cbd9970fb9-config\") on node \"crc\" DevicePath \"\""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.776742 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlw7w\" (UniqueName: \"kubernetes.io/projected/21585780-9181-47a1-beb1-72cbd9970fb9-kube-api-access-jlw7w\") on node \"crc\" DevicePath \"\""
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.822338 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d05a-account-create-update-7zctx"]
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.836049 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ms4pb"]
Feb 16 12:49:59 crc kubenswrapper[4799]: W0216 12:49:59.851822 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7801634e_581b_49f9_b90a_a1cd47b1d2fb.slice/crio-c20bace00d9f0c5403b79a2a636b2f2115641b3a1d72f4f1e78364e1e0b416dc WatchSource:0}: Error finding container c20bace00d9f0c5403b79a2a636b2f2115641b3a1d72f4f1e78364e1e0b416dc: Status 404 returned error can't find the container with id c20bace00d9f0c5403b79a2a636b2f2115641b3a1d72f4f1e78364e1e0b416dc
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.910684 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-df4c-account-create-update-qbwnq" event={"ID":"4d0089fc-1608-4b31-9219-bff2d2cbed59","Type":"ContainerStarted","Data":"7fa0a52bb3b80d306a01a82f98c00040fffae648ec0d57cba662079a98479f50"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.912683 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ms4pb" event={"ID":"26a1b93a-7a9e-49a5-8264-a9afb09de45d","Type":"ContainerStarted","Data":"96ee6c0d23c3bb70c6c6837e761ffc241339cc5c33258325ca9ad61daa35c01a"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.923568 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6ghf" event={"ID":"e330eb09-5b74-44cd-9812-1aaada5f979c","Type":"ContainerStarted","Data":"7fb92dd74d059fcb4aabb50253ca64b1a544af19dd37c891fd7df4361349c3ed"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.926725 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d05a-account-create-update-7zctx" event={"ID":"7801634e-581b-49f9-b90a-a1cd47b1d2fb","Type":"ContainerStarted","Data":"c20bace00d9f0c5403b79a2a636b2f2115641b3a1d72f4f1e78364e1e0b416dc"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.930069 4799 generic.go:334] "Generic (PLEG): container finished" podID="c90542f3-200f-4070-b73a-3b8bfd004fdc" containerID="9c829cb841312ef1c6100b12c46d70ae8424e3889fac5f1c891d6206dd980ef3" exitCode=0
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.930146 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snrvh" event={"ID":"c90542f3-200f-4070-b73a-3b8bfd004fdc","Type":"ContainerDied","Data":"9c829cb841312ef1c6100b12c46d70ae8424e3889fac5f1c891d6206dd980ef3"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.932036 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-82de-account-create-update-t8zkv" event={"ID":"60b4b8f9-9fae-4b55-8906-8bc269dc9f19","Type":"ContainerStarted","Data":"e222cb2d9a4e02771095659a3b0aa4c20bb3797650c080c586ae6417dba4b1fe"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.932061 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-82de-account-create-update-t8zkv" event={"ID":"60b4b8f9-9fae-4b55-8906-8bc269dc9f19","Type":"ContainerStarted","Data":"cfaf04fa604b9b0c6a05728e44e9ae7bfe76eab490db329c023a5f40c819aa26"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.941849 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-j6ghf" podStartSLOduration=4.352736258 podStartE2EDuration="6.94183173s" podCreationTimestamp="2026-02-16 12:49:53 +0000 UTC" firstStartedPulling="2026-02-16 12:49:56.215065346 +0000 UTC m=+1101.808080680" lastFinishedPulling="2026-02-16 12:49:58.804160818 +0000 UTC m=+1104.397176152" observedRunningTime="2026-02-16 12:49:59.940268865 +0000 UTC m=+1105.533284199" watchObservedRunningTime="2026-02-16 12:49:59.94183173 +0000 UTC m=+1105.534847064"
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.942436 4799 generic.go:334] "Generic (PLEG): container finished" podID="21585780-9181-47a1-beb1-72cbd9970fb9" containerID="886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e" exitCode=0
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.942584 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" event={"ID":"21585780-9181-47a1-beb1-72cbd9970fb9","Type":"ContainerDied","Data":"886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.942617 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb" event={"ID":"21585780-9181-47a1-beb1-72cbd9970fb9","Type":"ContainerDied","Data":"a9ad7471776086cd24252c3afc0980204ec162c3106ab2e5488cdb5adb7d6d3f"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.942637 4799 scope.go:117] "RemoveContainer" containerID="886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e"
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.942780 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f8f5886f-zzspb"
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.945692 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00557856672d278d53f2a41ae9269ccff749696935e98c7788e60ca0d1c9f59b"
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.945785 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2lxg"
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.969562 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bq6vr" event={"ID":"140b9f7a-d350-46a3-bd9d-83180f2d839b","Type":"ContainerStarted","Data":"06c12a495515d85cd207806c7f60f0ef7e828a57791ade96c4133d1a2f516ec5"}
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.984186 4799 scope.go:117] "RemoveContainer" containerID="c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e"
Feb 16 12:49:59 crc kubenswrapper[4799]: I0216 12:49:59.989161 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-82de-account-create-update-t8zkv" podStartSLOduration=3.988621696 podStartE2EDuration="3.988621696s" podCreationTimestamp="2026-02-16 12:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:49:59.976905577 +0000 UTC m=+1105.569920921" watchObservedRunningTime="2026-02-16 12:49:59.988621696 +0000 UTC m=+1105.581637030"
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.040346 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f8f5886f-zzspb"]
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.042584 4799 scope.go:117] "RemoveContainer" containerID="886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e"
Feb 16 12:50:00 crc kubenswrapper[4799]: E0216 12:50:00.043031 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e\": container with ID starting with 886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e not found: ID does not exist" containerID="886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e"
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.043087 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e"} err="failed to get container status \"886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e\": rpc error: code = NotFound desc = could not find container \"886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e\": container with ID starting with 886a00efb97e1a511793fc28992bee35a0290aba659ce8f9871ad965a87e467e not found: ID does not exist"
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.043114 4799 scope.go:117] "RemoveContainer" containerID="c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e"
Feb 16 12:50:00 crc kubenswrapper[4799]: E0216 12:50:00.043807 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e\": container with ID starting with c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e not found: ID does not exist" containerID="c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e"
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.043853 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e"} err="failed to get container status \"c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e\": rpc error: code = NotFound desc = could not find container \"c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e\": container with ID starting with c6472dd2d0747a1beafcec7a04e805ea0bc425a4ee237476c8d71fe9efdc783e not found: ID does not exist"
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.055535 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69f8f5886f-zzspb"]
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.985361 4799 generic.go:334] "Generic (PLEG): container finished" podID="140b9f7a-d350-46a3-bd9d-83180f2d839b" containerID="94c71a38347edea2ea51aba3a544b5ad15abb91269fda2cd25c87a6bbce94efc" exitCode=0
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.985433 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bq6vr" event={"ID":"140b9f7a-d350-46a3-bd9d-83180f2d839b","Type":"ContainerDied","Data":"94c71a38347edea2ea51aba3a544b5ad15abb91269fda2cd25c87a6bbce94efc"}
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.987661 4799 generic.go:334] "Generic (PLEG): container finished" podID="60b4b8f9-9fae-4b55-8906-8bc269dc9f19" containerID="e222cb2d9a4e02771095659a3b0aa4c20bb3797650c080c586ae6417dba4b1fe" exitCode=0
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.987930 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-82de-account-create-update-t8zkv" event={"ID":"60b4b8f9-9fae-4b55-8906-8bc269dc9f19","Type":"ContainerDied","Data":"e222cb2d9a4e02771095659a3b0aa4c20bb3797650c080c586ae6417dba4b1fe"}
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.989852 4799 generic.go:334] "Generic (PLEG): container finished" podID="26a1b93a-7a9e-49a5-8264-a9afb09de45d" containerID="ed45fc02ff39e9b6f6e6a25d8cc23fc9941e17bd530bec411fd19708e7df0a92" exitCode=0
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.989907 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ms4pb" event={"ID":"26a1b93a-7a9e-49a5-8264-a9afb09de45d","Type":"ContainerDied","Data":"ed45fc02ff39e9b6f6e6a25d8cc23fc9941e17bd530bec411fd19708e7df0a92"}
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.992024 4799 generic.go:334] "Generic (PLEG): container finished" podID="7801634e-581b-49f9-b90a-a1cd47b1d2fb" containerID="d362b8456ce4a9e33524e5b966bfb8bf280acac31c39c28446516f704dd4edfc" exitCode=0
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.992073 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d05a-account-create-update-7zctx" event={"ID":"7801634e-581b-49f9-b90a-a1cd47b1d2fb","Type":"ContainerDied","Data":"d362b8456ce4a9e33524e5b966bfb8bf280acac31c39c28446516f704dd4edfc"}
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.993310 4799 generic.go:334] "Generic (PLEG): container finished" podID="4d0089fc-1608-4b31-9219-bff2d2cbed59" containerID="bfdffbf0c2790705c63dbf605ba7e8cd906ac7b132fe2364a598b5710bdfe6f3" exitCode=0
Feb 16 12:50:00 crc kubenswrapper[4799]: I0216 12:50:00.993497 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-df4c-account-create-update-qbwnq" event={"ID":"4d0089fc-1608-4b31-9219-bff2d2cbed59","Type":"ContainerDied","Data":"bfdffbf0c2790705c63dbf605ba7e8cd906ac7b132fe2364a598b5710bdfe6f3"}
Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.160827 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21585780-9181-47a1-beb1-72cbd9970fb9" path="/var/lib/kubelet/pods/21585780-9181-47a1-beb1-72cbd9970fb9/volumes"
Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.162038 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eaff024-98d4-4065-9392-6786558aa720" path="/var/lib/kubelet/pods/3eaff024-98d4-4065-9392-6786558aa720/volumes"
Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.325470 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-snrvh"
Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.405500 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90542f3-200f-4070-b73a-3b8bfd004fdc-operator-scripts\") pod \"c90542f3-200f-4070-b73a-3b8bfd004fdc\" (UID: \"c90542f3-200f-4070-b73a-3b8bfd004fdc\") "
Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.405633 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smsqc\" (UniqueName: \"kubernetes.io/projected/c90542f3-200f-4070-b73a-3b8bfd004fdc-kube-api-access-smsqc\") pod \"c90542f3-200f-4070-b73a-3b8bfd004fdc\" (UID: \"c90542f3-200f-4070-b73a-3b8bfd004fdc\") "
Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.413809 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90542f3-200f-4070-b73a-3b8bfd004fdc-kube-api-access-smsqc" (OuterVolumeSpecName: "kube-api-access-smsqc") pod "c90542f3-200f-4070-b73a-3b8bfd004fdc" (UID: "c90542f3-200f-4070-b73a-3b8bfd004fdc"). InnerVolumeSpecName "kube-api-access-smsqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.429766 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90542f3-200f-4070-b73a-3b8bfd004fdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c90542f3-200f-4070-b73a-3b8bfd004fdc" (UID: "c90542f3-200f-4070-b73a-3b8bfd004fdc"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.507860 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smsqc\" (UniqueName: \"kubernetes.io/projected/c90542f3-200f-4070-b73a-3b8bfd004fdc-kube-api-access-smsqc\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:01 crc kubenswrapper[4799]: I0216 12:50:01.507919 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90542f3-200f-4070-b73a-3b8bfd004fdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.005976 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snrvh" event={"ID":"c90542f3-200f-4070-b73a-3b8bfd004fdc","Type":"ContainerDied","Data":"0aa7da40ddda6869cbc6271e995e2a3687c2a7555ee3b24296ac00dfda997a20"} Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.006057 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-snrvh" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.006056 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa7da40ddda6869cbc6271e995e2a3687c2a7555ee3b24296ac00dfda997a20" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.440356 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-bq6vr" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.532678 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wnkn\" (UniqueName: \"kubernetes.io/projected/140b9f7a-d350-46a3-bd9d-83180f2d839b-kube-api-access-2wnkn\") pod \"140b9f7a-d350-46a3-bd9d-83180f2d839b\" (UID: \"140b9f7a-d350-46a3-bd9d-83180f2d839b\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.532899 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140b9f7a-d350-46a3-bd9d-83180f2d839b-operator-scripts\") pod \"140b9f7a-d350-46a3-bd9d-83180f2d839b\" (UID: \"140b9f7a-d350-46a3-bd9d-83180f2d839b\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.536956 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140b9f7a-d350-46a3-bd9d-83180f2d839b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "140b9f7a-d350-46a3-bd9d-83180f2d839b" (UID: "140b9f7a-d350-46a3-bd9d-83180f2d839b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.549378 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140b9f7a-d350-46a3-bd9d-83180f2d839b-kube-api-access-2wnkn" (OuterVolumeSpecName: "kube-api-access-2wnkn") pod "140b9f7a-d350-46a3-bd9d-83180f2d839b" (UID: "140b9f7a-d350-46a3-bd9d-83180f2d839b"). InnerVolumeSpecName "kube-api-access-2wnkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.635007 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140b9f7a-d350-46a3-bd9d-83180f2d839b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.635313 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wnkn\" (UniqueName: \"kubernetes.io/projected/140b9f7a-d350-46a3-bd9d-83180f2d839b-kube-api-access-2wnkn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.710285 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.721066 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ms4pb" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.767473 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d05a-account-create-update-7zctx" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.768725 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc4x6\" (UniqueName: \"kubernetes.io/projected/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-kube-api-access-qc4x6\") pod \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\" (UID: \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.768812 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-operator-scripts\") pod \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\" (UID: \"60b4b8f9-9fae-4b55-8906-8bc269dc9f19\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.768897 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl477\" (UniqueName: \"kubernetes.io/projected/26a1b93a-7a9e-49a5-8264-a9afb09de45d-kube-api-access-dl477\") pod \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\" (UID: \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.768941 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26a1b93a-7a9e-49a5-8264-a9afb09de45d-operator-scripts\") pod \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\" (UID: \"26a1b93a-7a9e-49a5-8264-a9afb09de45d\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.772240 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a1b93a-7a9e-49a5-8264-a9afb09de45d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26a1b93a-7a9e-49a5-8264-a9afb09de45d" (UID: "26a1b93a-7a9e-49a5-8264-a9afb09de45d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.774367 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60b4b8f9-9fae-4b55-8906-8bc269dc9f19" (UID: "60b4b8f9-9fae-4b55-8906-8bc269dc9f19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.777005 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-df4c-account-create-update-qbwnq" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.777815 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-kube-api-access-qc4x6" (OuterVolumeSpecName: "kube-api-access-qc4x6") pod "60b4b8f9-9fae-4b55-8906-8bc269dc9f19" (UID: "60b4b8f9-9fae-4b55-8906-8bc269dc9f19"). InnerVolumeSpecName "kube-api-access-qc4x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.779308 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a1b93a-7a9e-49a5-8264-a9afb09de45d-kube-api-access-dl477" (OuterVolumeSpecName: "kube-api-access-dl477") pod "26a1b93a-7a9e-49a5-8264-a9afb09de45d" (UID: "26a1b93a-7a9e-49a5-8264-a9afb09de45d"). InnerVolumeSpecName "kube-api-access-dl477". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.872585 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2nnm\" (UniqueName: \"kubernetes.io/projected/4d0089fc-1608-4b31-9219-bff2d2cbed59-kube-api-access-k2nnm\") pod \"4d0089fc-1608-4b31-9219-bff2d2cbed59\" (UID: \"4d0089fc-1608-4b31-9219-bff2d2cbed59\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.872677 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7801634e-581b-49f9-b90a-a1cd47b1d2fb-operator-scripts\") pod \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\" (UID: \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.872772 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0089fc-1608-4b31-9219-bff2d2cbed59-operator-scripts\") pod \"4d0089fc-1608-4b31-9219-bff2d2cbed59\" (UID: \"4d0089fc-1608-4b31-9219-bff2d2cbed59\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.872796 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvw6k\" (UniqueName: \"kubernetes.io/projected/7801634e-581b-49f9-b90a-a1cd47b1d2fb-kube-api-access-jvw6k\") pod \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\" (UID: \"7801634e-581b-49f9-b90a-a1cd47b1d2fb\") " Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.873154 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d0089fc-1608-4b31-9219-bff2d2cbed59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d0089fc-1608-4b31-9219-bff2d2cbed59" (UID: "4d0089fc-1608-4b31-9219-bff2d2cbed59"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.873157 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7801634e-581b-49f9-b90a-a1cd47b1d2fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7801634e-581b-49f9-b90a-a1cd47b1d2fb" (UID: "7801634e-581b-49f9-b90a-a1cd47b1d2fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.873296 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc4x6\" (UniqueName: \"kubernetes.io/projected/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-kube-api-access-qc4x6\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.873315 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b4b8f9-9fae-4b55-8906-8bc269dc9f19-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.873326 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl477\" (UniqueName: \"kubernetes.io/projected/26a1b93a-7a9e-49a5-8264-a9afb09de45d-kube-api-access-dl477\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.873360 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26a1b93a-7a9e-49a5-8264-a9afb09de45d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.876231 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7801634e-581b-49f9-b90a-a1cd47b1d2fb-kube-api-access-jvw6k" (OuterVolumeSpecName: "kube-api-access-jvw6k") pod "7801634e-581b-49f9-b90a-a1cd47b1d2fb" (UID: "7801634e-581b-49f9-b90a-a1cd47b1d2fb"). 
InnerVolumeSpecName "kube-api-access-jvw6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.879256 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0089fc-1608-4b31-9219-bff2d2cbed59-kube-api-access-k2nnm" (OuterVolumeSpecName: "kube-api-access-k2nnm") pod "4d0089fc-1608-4b31-9219-bff2d2cbed59" (UID: "4d0089fc-1608-4b31-9219-bff2d2cbed59"). InnerVolumeSpecName "kube-api-access-k2nnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.984260 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2nnm\" (UniqueName: \"kubernetes.io/projected/4d0089fc-1608-4b31-9219-bff2d2cbed59-kube-api-access-k2nnm\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.984295 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7801634e-581b-49f9-b90a-a1cd47b1d2fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.984303 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d0089fc-1608-4b31-9219-bff2d2cbed59-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:02 crc kubenswrapper[4799]: I0216 12:50:02.984312 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvw6k\" (UniqueName: \"kubernetes.io/projected/7801634e-581b-49f9-b90a-a1cd47b1d2fb-kube-api-access-jvw6k\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.004851 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6zwsg"] Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 12:50:03.005212 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="21585780-9181-47a1-beb1-72cbd9970fb9" containerName="dnsmasq-dns" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005223 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="21585780-9181-47a1-beb1-72cbd9970fb9" containerName="dnsmasq-dns" Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 12:50:03.005237 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b4b8f9-9fae-4b55-8906-8bc269dc9f19" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005243 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b4b8f9-9fae-4b55-8906-8bc269dc9f19" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 12:50:03.005254 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0089fc-1608-4b31-9219-bff2d2cbed59" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005260 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0089fc-1608-4b31-9219-bff2d2cbed59" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 12:50:03.005267 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7801634e-581b-49f9-b90a-a1cd47b1d2fb" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005275 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7801634e-581b-49f9-b90a-a1cd47b1d2fb" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 12:50:03.005287 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a1b93a-7a9e-49a5-8264-a9afb09de45d" containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005292 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a1b93a-7a9e-49a5-8264-a9afb09de45d" containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 
12:50:03.005303 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90542f3-200f-4070-b73a-3b8bfd004fdc" containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005310 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90542f3-200f-4070-b73a-3b8bfd004fdc" containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 12:50:03.005322 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21585780-9181-47a1-beb1-72cbd9970fb9" containerName="init" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005327 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="21585780-9181-47a1-beb1-72cbd9970fb9" containerName="init" Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 12:50:03.005336 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140b9f7a-d350-46a3-bd9d-83180f2d839b" containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005343 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="140b9f7a-d350-46a3-bd9d-83180f2d839b" containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: E0216 12:50:03.005384 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eaff024-98d4-4065-9392-6786558aa720" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005391 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eaff024-98d4-4065-9392-6786558aa720" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005533 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eaff024-98d4-4065-9392-6786558aa720" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005543 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="140b9f7a-d350-46a3-bd9d-83180f2d839b" 
containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005554 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0089fc-1608-4b31-9219-bff2d2cbed59" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005565 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="21585780-9181-47a1-beb1-72cbd9970fb9" containerName="dnsmasq-dns" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005577 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c90542f3-200f-4070-b73a-3b8bfd004fdc" containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005584 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7801634e-581b-49f9-b90a-a1cd47b1d2fb" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005594 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b4b8f9-9fae-4b55-8906-8bc269dc9f19" containerName="mariadb-account-create-update" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.005601 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a1b93a-7a9e-49a5-8264-a9afb09de45d" containerName="mariadb-database-create" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.006205 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6zwsg" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.016101 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6zwsg"] Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.026881 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.031087 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-df4c-account-create-update-qbwnq" event={"ID":"4d0089fc-1608-4b31-9219-bff2d2cbed59","Type":"ContainerDied","Data":"7fa0a52bb3b80d306a01a82f98c00040fffae648ec0d57cba662079a98479f50"} Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.031141 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-df4c-account-create-update-qbwnq" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.031399 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa0a52bb3b80d306a01a82f98c00040fffae648ec0d57cba662079a98479f50" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.033680 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-bq6vr" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.033683 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bq6vr" event={"ID":"140b9f7a-d350-46a3-bd9d-83180f2d839b","Type":"ContainerDied","Data":"06c12a495515d85cd207806c7f60f0ef7e828a57791ade96c4133d1a2f516ec5"} Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.033721 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c12a495515d85cd207806c7f60f0ef7e828a57791ade96c4133d1a2f516ec5" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.042065 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-82de-account-create-update-t8zkv" event={"ID":"60b4b8f9-9fae-4b55-8906-8bc269dc9f19","Type":"ContainerDied","Data":"cfaf04fa604b9b0c6a05728e44e9ae7bfe76eab490db329c023a5f40c819aa26"} Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.042314 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfaf04fa604b9b0c6a05728e44e9ae7bfe76eab490db329c023a5f40c819aa26" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.042261 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-82de-account-create-update-t8zkv" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.045041 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ms4pb" event={"ID":"26a1b93a-7a9e-49a5-8264-a9afb09de45d","Type":"ContainerDied","Data":"96ee6c0d23c3bb70c6c6837e761ffc241339cc5c33258325ca9ad61daa35c01a"} Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.045083 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ee6c0d23c3bb70c6c6837e761ffc241339cc5c33258325ca9ad61daa35c01a" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.045151 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ms4pb" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.047635 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d05a-account-create-update-7zctx" event={"ID":"7801634e-581b-49f9-b90a-a1cd47b1d2fb","Type":"ContainerDied","Data":"c20bace00d9f0c5403b79a2a636b2f2115641b3a1d72f4f1e78364e1e0b416dc"} Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.047659 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c20bace00d9f0c5403b79a2a636b2f2115641b3a1d72f4f1e78364e1e0b416dc" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.047695 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d05a-account-create-update-7zctx" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.086316 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/914636c3-4c84-4910-9eb5-2879e2bcad54-operator-scripts\") pod \"root-account-create-update-6zwsg\" (UID: \"914636c3-4c84-4910-9eb5-2879e2bcad54\") " pod="openstack/root-account-create-update-6zwsg" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.086436 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdrbw\" (UniqueName: \"kubernetes.io/projected/914636c3-4c84-4910-9eb5-2879e2bcad54-kube-api-access-rdrbw\") pod \"root-account-create-update-6zwsg\" (UID: \"914636c3-4c84-4910-9eb5-2879e2bcad54\") " pod="openstack/root-account-create-update-6zwsg" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.188183 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/914636c3-4c84-4910-9eb5-2879e2bcad54-operator-scripts\") pod \"root-account-create-update-6zwsg\" (UID: 
\"914636c3-4c84-4910-9eb5-2879e2bcad54\") " pod="openstack/root-account-create-update-6zwsg" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.188675 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdrbw\" (UniqueName: \"kubernetes.io/projected/914636c3-4c84-4910-9eb5-2879e2bcad54-kube-api-access-rdrbw\") pod \"root-account-create-update-6zwsg\" (UID: \"914636c3-4c84-4910-9eb5-2879e2bcad54\") " pod="openstack/root-account-create-update-6zwsg" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.189159 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/914636c3-4c84-4910-9eb5-2879e2bcad54-operator-scripts\") pod \"root-account-create-update-6zwsg\" (UID: \"914636c3-4c84-4910-9eb5-2879e2bcad54\") " pod="openstack/root-account-create-update-6zwsg" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.216630 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdrbw\" (UniqueName: \"kubernetes.io/projected/914636c3-4c84-4910-9eb5-2879e2bcad54-kube-api-access-rdrbw\") pod \"root-account-create-update-6zwsg\" (UID: \"914636c3-4c84-4910-9eb5-2879e2bcad54\") " pod="openstack/root-account-create-update-6zwsg" Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.349521 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6zwsg"
Feb 16 12:50:03 crc kubenswrapper[4799]: I0216 12:50:03.836182 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6zwsg"]
Feb 16 12:50:03 crc kubenswrapper[4799]: W0216 12:50:03.848368 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod914636c3_4c84_4910_9eb5_2879e2bcad54.slice/crio-859475e775b2a982987f55f382f7c3614fcc651e374e60385c4f42929240cc04 WatchSource:0}: Error finding container 859475e775b2a982987f55f382f7c3614fcc651e374e60385c4f42929240cc04: Status 404 returned error can't find the container with id 859475e775b2a982987f55f382f7c3614fcc651e374e60385c4f42929240cc04
Feb 16 12:50:04 crc kubenswrapper[4799]: I0216 12:50:04.059236 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6zwsg" event={"ID":"914636c3-4c84-4910-9eb5-2879e2bcad54","Type":"ContainerStarted","Data":"859475e775b2a982987f55f382f7c3614fcc651e374e60385c4f42929240cc04"}
Feb 16 12:50:05 crc kubenswrapper[4799]: I0216 12:50:05.073515 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerStarted","Data":"dc8c809f41a3c6d5bb196e795ccb1922696d25f9d178af72c5b242b22dd352fd"}
Feb 16 12:50:05 crc kubenswrapper[4799]: I0216 12:50:05.078390 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6zwsg" event={"ID":"914636c3-4c84-4910-9eb5-2879e2bcad54","Type":"ContainerStarted","Data":"6cd0469fed761d02e54904414d4e2e7a73772546160cbcf48fea78999d533374"}
Feb 16 12:50:05 crc kubenswrapper[4799]: I0216 12:50:05.101874 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=10.578482483 podStartE2EDuration="47.101834976s" podCreationTimestamp="2026-02-16 12:49:18 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.24182063 +0000 UTC m=+1073.834835984" lastFinishedPulling="2026-02-16 12:50:04.765173143 +0000 UTC m=+1110.358188477" observedRunningTime="2026-02-16 12:50:05.094756551 +0000 UTC m=+1110.687771885" watchObservedRunningTime="2026-02-16 12:50:05.101834976 +0000 UTC m=+1110.694850350"
Feb 16 12:50:05 crc kubenswrapper[4799]: I0216 12:50:05.118527 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6zwsg" podStartSLOduration=3.118484549 podStartE2EDuration="3.118484549s" podCreationTimestamp="2026-02-16 12:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:05.114080652 +0000 UTC m=+1110.707095986" watchObservedRunningTime="2026-02-16 12:50:05.118484549 +0000 UTC m=+1110.711499873"
Feb 16 12:50:05 crc kubenswrapper[4799]: I0216 12:50:05.336970 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0"
Feb 16 12:50:05 crc kubenswrapper[4799]: E0216 12:50:05.337383 4799 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 12:50:05 crc kubenswrapper[4799]: E0216 12:50:05.337432 4799 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 12:50:05 crc kubenswrapper[4799]: E0216 12:50:05.337483 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift podName:95bfd980-54e7-4b29-a896-dc1cc52291fd nodeName:}" failed. No retries permitted until 2026-02-16 12:50:21.33746677 +0000 UTC m=+1126.930482114 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift") pod "swift-storage-0" (UID: "95bfd980-54e7-4b29-a896-dc1cc52291fd") : configmap "swift-ring-files" not found
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.088413 4799 generic.go:334] "Generic (PLEG): container finished" podID="914636c3-4c84-4910-9eb5-2879e2bcad54" containerID="6cd0469fed761d02e54904414d4e2e7a73772546160cbcf48fea78999d533374" exitCode=0
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.088472 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6zwsg" event={"ID":"914636c3-4c84-4910-9eb5-2879e2bcad54","Type":"ContainerDied","Data":"6cd0469fed761d02e54904414d4e2e7a73772546160cbcf48fea78999d533374"}
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.465153 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mxmd5"]
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.466423 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.468248 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.469087 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4c8qx"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.482272 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mxmd5"]
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.561169 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-combined-ca-bundle\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.561218 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-config-data\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.561243 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-db-sync-config-data\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.561280 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzxnj\" (UniqueName: \"kubernetes.io/projected/ff79791d-f33a-4986-9dd4-67c6af5bf747-kube-api-access-nzxnj\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.663764 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-combined-ca-bundle\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.663824 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-config-data\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.663856 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-db-sync-config-data\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.663905 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzxnj\" (UniqueName: \"kubernetes.io/projected/ff79791d-f33a-4986-9dd4-67c6af5bf747-kube-api-access-nzxnj\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.672343 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-config-data\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.672801 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-combined-ca-bundle\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.675586 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-db-sync-config-data\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.685080 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzxnj\" (UniqueName: \"kubernetes.io/projected/ff79791d-f33a-4986-9dd4-67c6af5bf747-kube-api-access-nzxnj\") pod \"glance-db-sync-mxmd5\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:06 crc kubenswrapper[4799]: I0216 12:50:06.789688 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mxmd5"
Feb 16 12:50:07 crc kubenswrapper[4799]: I0216 12:50:07.428473 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mxmd5"]
Feb 16 12:50:07 crc kubenswrapper[4799]: I0216 12:50:07.430001 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6zwsg"
Feb 16 12:50:07 crc kubenswrapper[4799]: W0216 12:50:07.436032 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff79791d_f33a_4986_9dd4_67c6af5bf747.slice/crio-6cf91dfd0163e7059c3c9a7cfd99a8a9ad02792d64617dedccb184cdfb298121 WatchSource:0}: Error finding container 6cf91dfd0163e7059c3c9a7cfd99a8a9ad02792d64617dedccb184cdfb298121: Status 404 returned error can't find the container with id 6cf91dfd0163e7059c3c9a7cfd99a8a9ad02792d64617dedccb184cdfb298121
Feb 16 12:50:07 crc kubenswrapper[4799]: I0216 12:50:07.483595 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/914636c3-4c84-4910-9eb5-2879e2bcad54-operator-scripts\") pod \"914636c3-4c84-4910-9eb5-2879e2bcad54\" (UID: \"914636c3-4c84-4910-9eb5-2879e2bcad54\") "
Feb 16 12:50:07 crc kubenswrapper[4799]: I0216 12:50:07.483677 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdrbw\" (UniqueName: \"kubernetes.io/projected/914636c3-4c84-4910-9eb5-2879e2bcad54-kube-api-access-rdrbw\") pod \"914636c3-4c84-4910-9eb5-2879e2bcad54\" (UID: \"914636c3-4c84-4910-9eb5-2879e2bcad54\") "
Feb 16 12:50:07 crc kubenswrapper[4799]: I0216 12:50:07.484980 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/914636c3-4c84-4910-9eb5-2879e2bcad54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "914636c3-4c84-4910-9eb5-2879e2bcad54" (UID: "914636c3-4c84-4910-9eb5-2879e2bcad54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:50:07 crc kubenswrapper[4799]: I0216 12:50:07.488871 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914636c3-4c84-4910-9eb5-2879e2bcad54-kube-api-access-rdrbw" (OuterVolumeSpecName: "kube-api-access-rdrbw") pod "914636c3-4c84-4910-9eb5-2879e2bcad54" (UID: "914636c3-4c84-4910-9eb5-2879e2bcad54"). InnerVolumeSpecName "kube-api-access-rdrbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:50:07 crc kubenswrapper[4799]: I0216 12:50:07.585454 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/914636c3-4c84-4910-9eb5-2879e2bcad54-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:07 crc kubenswrapper[4799]: I0216 12:50:07.585502 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdrbw\" (UniqueName: \"kubernetes.io/projected/914636c3-4c84-4910-9eb5-2879e2bcad54-kube-api-access-rdrbw\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:08 crc kubenswrapper[4799]: I0216 12:50:08.106493 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxmd5" event={"ID":"ff79791d-f33a-4986-9dd4-67c6af5bf747","Type":"ContainerStarted","Data":"6cf91dfd0163e7059c3c9a7cfd99a8a9ad02792d64617dedccb184cdfb298121"}
Feb 16 12:50:08 crc kubenswrapper[4799]: I0216 12:50:08.108609 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6zwsg" event={"ID":"914636c3-4c84-4910-9eb5-2879e2bcad54","Type":"ContainerDied","Data":"859475e775b2a982987f55f382f7c3614fcc651e374e60385c4f42929240cc04"}
Feb 16 12:50:08 crc kubenswrapper[4799]: I0216 12:50:08.108644 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="859475e775b2a982987f55f382f7c3614fcc651e374e60385c4f42929240cc04"
Feb 16 12:50:08 crc kubenswrapper[4799]: I0216 12:50:08.108700 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6zwsg"
Feb 16 12:50:09 crc kubenswrapper[4799]: I0216 12:50:09.516116 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6zwsg"]
Feb 16 12:50:09 crc kubenswrapper[4799]: I0216 12:50:09.517984 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 16 12:50:09 crc kubenswrapper[4799]: I0216 12:50:09.521949 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6zwsg"]
Feb 16 12:50:10 crc kubenswrapper[4799]: I0216 12:50:10.135416 4799 generic.go:334] "Generic (PLEG): container finished" podID="e330eb09-5b74-44cd-9812-1aaada5f979c" containerID="7fb92dd74d059fcb4aabb50253ca64b1a544af19dd37c891fd7df4361349c3ed" exitCode=0
Feb 16 12:50:10 crc kubenswrapper[4799]: I0216 12:50:10.135468 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6ghf" event={"ID":"e330eb09-5b74-44cd-9812-1aaada5f979c","Type":"ContainerDied","Data":"7fb92dd74d059fcb4aabb50253ca64b1a544af19dd37c891fd7df4361349c3ed"}
Feb 16 12:50:10 crc kubenswrapper[4799]: I0216 12:50:10.497486 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.163258 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914636c3-4c84-4910-9eb5-2879e2bcad54" path="/var/lib/kubelet/pods/914636c3-4c84-4910-9eb5-2879e2bcad54/volumes"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.325772 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wr6ph" podUID="d0a8e986-71a6-47cc-a34e-ddc323df4af4" containerName="ovn-controller" probeResult="failure" output=<
Feb 16 12:50:11 crc kubenswrapper[4799]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 16 12:50:11 crc kubenswrapper[4799]: >
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.421095 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6rnj7"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.428527 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6rnj7"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.541343 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j6ghf"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.556832 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q86mp\" (UniqueName: \"kubernetes.io/projected/e330eb09-5b74-44cd-9812-1aaada5f979c-kube-api-access-q86mp\") pod \"e330eb09-5b74-44cd-9812-1aaada5f979c\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") "
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.557007 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-dispersionconf\") pod \"e330eb09-5b74-44cd-9812-1aaada5f979c\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") "
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.557054 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e330eb09-5b74-44cd-9812-1aaada5f979c-etc-swift\") pod \"e330eb09-5b74-44cd-9812-1aaada5f979c\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") "
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.557099 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-combined-ca-bundle\") pod \"e330eb09-5b74-44cd-9812-1aaada5f979c\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") "
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.557178 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-swiftconf\") pod \"e330eb09-5b74-44cd-9812-1aaada5f979c\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") "
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.557215 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-ring-data-devices\") pod \"e330eb09-5b74-44cd-9812-1aaada5f979c\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") "
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.557304 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-scripts\") pod \"e330eb09-5b74-44cd-9812-1aaada5f979c\" (UID: \"e330eb09-5b74-44cd-9812-1aaada5f979c\") "
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.561775 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e330eb09-5b74-44cd-9812-1aaada5f979c" (UID: "e330eb09-5b74-44cd-9812-1aaada5f979c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.562720 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e330eb09-5b74-44cd-9812-1aaada5f979c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e330eb09-5b74-44cd-9812-1aaada5f979c" (UID: "e330eb09-5b74-44cd-9812-1aaada5f979c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.565336 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e330eb09-5b74-44cd-9812-1aaada5f979c-kube-api-access-q86mp" (OuterVolumeSpecName: "kube-api-access-q86mp") pod "e330eb09-5b74-44cd-9812-1aaada5f979c" (UID: "e330eb09-5b74-44cd-9812-1aaada5f979c"). InnerVolumeSpecName "kube-api-access-q86mp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.572185 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e330eb09-5b74-44cd-9812-1aaada5f979c" (UID: "e330eb09-5b74-44cd-9812-1aaada5f979c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.580229 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-scripts" (OuterVolumeSpecName: "scripts") pod "e330eb09-5b74-44cd-9812-1aaada5f979c" (UID: "e330eb09-5b74-44cd-9812-1aaada5f979c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.615176 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e330eb09-5b74-44cd-9812-1aaada5f979c" (UID: "e330eb09-5b74-44cd-9812-1aaada5f979c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.621303 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e330eb09-5b74-44cd-9812-1aaada5f979c" (UID: "e330eb09-5b74-44cd-9812-1aaada5f979c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.659526 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.659560 4799 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.659591 4799 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.659601 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330eb09-5b74-44cd-9812-1aaada5f979c-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.659610 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q86mp\" (UniqueName: \"kubernetes.io/projected/e330eb09-5b74-44cd-9812-1aaada5f979c-kube-api-access-q86mp\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.659633 4799 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e330eb09-5b74-44cd-9812-1aaada5f979c-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.659642 4799 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e330eb09-5b74-44cd-9812-1aaada5f979c-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.685347 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wr6ph-config-27h4v"]
Feb 16 12:50:11 crc kubenswrapper[4799]: E0216 12:50:11.685861 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914636c3-4c84-4910-9eb5-2879e2bcad54" containerName="mariadb-account-create-update"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.685889 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="914636c3-4c84-4910-9eb5-2879e2bcad54" containerName="mariadb-account-create-update"
Feb 16 12:50:11 crc kubenswrapper[4799]: E0216 12:50:11.685901 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e330eb09-5b74-44cd-9812-1aaada5f979c" containerName="swift-ring-rebalance"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.685924 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e330eb09-5b74-44cd-9812-1aaada5f979c" containerName="swift-ring-rebalance"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.686169 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="914636c3-4c84-4910-9eb5-2879e2bcad54" containerName="mariadb-account-create-update"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.686226 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e330eb09-5b74-44cd-9812-1aaada5f979c" containerName="swift-ring-rebalance"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.686963 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.691419 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.708889 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-27h4v"]
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.761104 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run-ovn\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.761187 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-scripts\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.761225 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-log-ovn\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.761305 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.761343 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm7p6\" (UniqueName: \"kubernetes.io/projected/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-kube-api-access-vm7p6\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.761370 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-additional-scripts\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.863765 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run-ovn\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.863836 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-scripts\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.863875 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-log-ovn\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.863923 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.863952 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm7p6\" (UniqueName: \"kubernetes.io/projected/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-kube-api-access-vm7p6\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.864214 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run-ovn\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.864293 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-log-ovn\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.864358 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.864402 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-additional-scripts\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.865104 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-additional-scripts\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.866661 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-scripts\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:11 crc kubenswrapper[4799]: I0216 12:50:11.882548 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm7p6\" (UniqueName: \"kubernetes.io/projected/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-kube-api-access-vm7p6\") pod \"ovn-controller-wr6ph-config-27h4v\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.011953 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-27h4v"
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.163984 4799 generic.go:334] "Generic (PLEG): container finished" podID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerID="ab78b8d9b5f8e466b857a5f3123961b938a51fbc0fdeca53ac77857645a6278b" exitCode=0
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.164174 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8af3fbd4-c626-4920-915d-0f50d12662b6","Type":"ContainerDied","Data":"ab78b8d9b5f8e466b857a5f3123961b938a51fbc0fdeca53ac77857645a6278b"}
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.172944 4799 generic.go:334] "Generic (PLEG): container finished" podID="5b6ff320-8742-454a-9a6e-766db7e2c3a8" containerID="0e738a235bf1a03a4fb291657c4ea978e5d51909cd447a2ba9108184991c5070" exitCode=0
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.173042 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5b6ff320-8742-454a-9a6e-766db7e2c3a8","Type":"ContainerDied","Data":"0e738a235bf1a03a4fb291657c4ea978e5d51909cd447a2ba9108184991c5070"}
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.181461 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j6ghf"
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.181514 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6ghf" event={"ID":"e330eb09-5b74-44cd-9812-1aaada5f979c","Type":"ContainerDied","Data":"9dfd09c797619b48352fd2aa0628a134017412a4d3363ba9adce97eed051229c"}
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.181922 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dfd09c797619b48352fd2aa0628a134017412a4d3363ba9adce97eed051229c"
Feb 16 12:50:12 crc kubenswrapper[4799]: I0216 12:50:12.526067 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-27h4v"]
Feb 16 12:50:12 crc kubenswrapper[4799]: W0216 12:50:12.527328 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32fcb0cb_4efa_4b4e_8945_482b85e9acd1.slice/crio-3979f7df712c4d3ddb192db1569c865a033e0103a2294d8468b89521bf20f9fc WatchSource:0}: Error finding container 3979f7df712c4d3ddb192db1569c865a033e0103a2294d8468b89521bf20f9fc: Status 404 returned error can't find the container with id 3979f7df712c4d3ddb192db1569c865a033e0103a2294d8468b89521bf20f9fc
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.004422 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-86tqv"]
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.007043 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-86tqv"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.014637 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.028006 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-86tqv"]
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.089307 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a520be06-55f1-4803-b8ec-d5fa426da969-operator-scripts\") pod \"root-account-create-update-86tqv\" (UID: \"a520be06-55f1-4803-b8ec-d5fa426da969\") " pod="openstack/root-account-create-update-86tqv"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.089386 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7rvq\" (UniqueName: \"kubernetes.io/projected/a520be06-55f1-4803-b8ec-d5fa426da969-kube-api-access-b7rvq\") pod \"root-account-create-update-86tqv\" (UID: \"a520be06-55f1-4803-b8ec-d5fa426da969\") " pod="openstack/root-account-create-update-86tqv"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.190915 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a520be06-55f1-4803-b8ec-d5fa426da969-operator-scripts\") pod \"root-account-create-update-86tqv\" (UID: \"a520be06-55f1-4803-b8ec-d5fa426da969\") " pod="openstack/root-account-create-update-86tqv"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.191006 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7rvq\" (UniqueName: \"kubernetes.io/projected/a520be06-55f1-4803-b8ec-d5fa426da969-kube-api-access-b7rvq\") pod \"root-account-create-update-86tqv\" (UID: \"a520be06-55f1-4803-b8ec-d5fa426da969\") " pod="openstack/root-account-create-update-86tqv"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.192390 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a520be06-55f1-4803-b8ec-d5fa426da969-operator-scripts\") pod \"root-account-create-update-86tqv\" (UID: \"a520be06-55f1-4803-b8ec-d5fa426da969\") " pod="openstack/root-account-create-update-86tqv"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.193422 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5b6ff320-8742-454a-9a6e-766db7e2c3a8","Type":"ContainerStarted","Data":"bd852a356d05146d184f43353721bc615cf6c8339bad00d82c30af60945334f0"}
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.193653 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.197783 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-27h4v" event={"ID":"32fcb0cb-4efa-4b4e-8945-482b85e9acd1","Type":"ContainerStarted","Data":"3979f7df712c4d3ddb192db1569c865a033e0103a2294d8468b89521bf20f9fc"}
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.199355 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8af3fbd4-c626-4920-915d-0f50d12662b6","Type":"ContainerStarted","Data":"81152b23dfbe435acd5f67f4e28899693c445d87d9cafae393f5f1445510a537"}
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.199999 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.210441 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7rvq\" (UniqueName: 
\"kubernetes.io/projected/a520be06-55f1-4803-b8ec-d5fa426da969-kube-api-access-b7rvq\") pod \"root-account-create-update-86tqv\" (UID: \"a520be06-55f1-4803-b8ec-d5fa426da969\") " pod="openstack/root-account-create-update-86tqv" Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.232358 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=54.153628369 podStartE2EDuration="1m2.232336516s" podCreationTimestamp="2026-02-16 12:49:11 +0000 UTC" firstStartedPulling="2026-02-16 12:49:26.279710741 +0000 UTC m=+1071.872726075" lastFinishedPulling="2026-02-16 12:49:34.358418878 +0000 UTC m=+1079.951434222" observedRunningTime="2026-02-16 12:50:13.230527684 +0000 UTC m=+1118.823543018" watchObservedRunningTime="2026-02-16 12:50:13.232336516 +0000 UTC m=+1118.825351850" Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.261631 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.533681359 podStartE2EDuration="1m2.261615875s" podCreationTimestamp="2026-02-16 12:49:11 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.21320488 +0000 UTC m=+1073.806220264" lastFinishedPulling="2026-02-16 12:49:34.941139446 +0000 UTC m=+1080.534154780" observedRunningTime="2026-02-16 12:50:13.256751604 +0000 UTC m=+1118.849766938" watchObservedRunningTime="2026-02-16 12:50:13.261615875 +0000 UTC m=+1118.854631209" Feb 16 12:50:13 crc kubenswrapper[4799]: I0216 12:50:13.334873 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-86tqv" Feb 16 12:50:14 crc kubenswrapper[4799]: I0216 12:50:14.044100 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-86tqv"] Feb 16 12:50:14 crc kubenswrapper[4799]: I0216 12:50:14.209013 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-86tqv" event={"ID":"a520be06-55f1-4803-b8ec-d5fa426da969","Type":"ContainerStarted","Data":"6bb92be9bd023717ab5d3802a4e898a7d6252b5896ac7796f539bd42f01ea51b"} Feb 16 12:50:14 crc kubenswrapper[4799]: I0216 12:50:14.211917 4799 generic.go:334] "Generic (PLEG): container finished" podID="32fcb0cb-4efa-4b4e-8945-482b85e9acd1" containerID="6132e87ef3b709ae85adf98fb009b758f5bba35dd5dbb36931fac3276a325e8b" exitCode=0 Feb 16 12:50:14 crc kubenswrapper[4799]: I0216 12:50:14.211972 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-27h4v" event={"ID":"32fcb0cb-4efa-4b4e-8945-482b85e9acd1","Type":"ContainerDied","Data":"6132e87ef3b709ae85adf98fb009b758f5bba35dd5dbb36931fac3276a325e8b"} Feb 16 12:50:15 crc kubenswrapper[4799]: I0216 12:50:15.224196 4799 generic.go:334] "Generic (PLEG): container finished" podID="a520be06-55f1-4803-b8ec-d5fa426da969" containerID="6f518bec20066a7212091392726168c088d4d5961c302aaa4d6e508aebb28360" exitCode=0 Feb 16 12:50:15 crc kubenswrapper[4799]: I0216 12:50:15.224293 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-86tqv" event={"ID":"a520be06-55f1-4803-b8ec-d5fa426da969","Type":"ContainerDied","Data":"6f518bec20066a7212091392726168c088d4d5961c302aaa4d6e508aebb28360"} Feb 16 12:50:16 crc kubenswrapper[4799]: I0216 12:50:16.334841 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wr6ph" Feb 16 12:50:19 crc kubenswrapper[4799]: I0216 12:50:19.257181 4799 generic.go:334] "Generic (PLEG): container 
finished" podID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerID="1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096" exitCode=0 Feb 16 12:50:19 crc kubenswrapper[4799]: I0216 12:50:19.257195 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e3da06f-f1ef-4b8c-963b-0994cde5fab7","Type":"ContainerDied","Data":"1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096"} Feb 16 12:50:19 crc kubenswrapper[4799]: I0216 12:50:19.518627 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:19 crc kubenswrapper[4799]: I0216 12:50:19.520897 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:20 crc kubenswrapper[4799]: I0216 12:50:20.269324 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:21 crc kubenswrapper[4799]: I0216 12:50:21.355535 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:50:21 crc kubenswrapper[4799]: I0216 12:50:21.379225 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/95bfd980-54e7-4b29-a896-dc1cc52291fd-etc-swift\") pod \"swift-storage-0\" (UID: \"95bfd980-54e7-4b29-a896-dc1cc52291fd\") " pod="openstack/swift-storage-0" Feb 16 12:50:21 crc kubenswrapper[4799]: I0216 12:50:21.397720 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.584071 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.585744 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.586229 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="prometheus" containerID="cri-o://f233826153818b953c7c0806a3d1aa5f379a1798f7799f53a3b37914e7663993" gracePeriod=600 Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.586922 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="thanos-sidecar" containerID="cri-o://dc8c809f41a3c6d5bb196e795ccb1922696d25f9d178af72c5b242b22dd352fd" gracePeriod=600 Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.587095 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="config-reloader" containerID="cri-o://c3f321bfaa92e5cc62a77dfc2c67710158e95540673cd1e66e556b54e609c988" gracePeriod=600 Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.611078 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-86tqv" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.623841 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-27h4v" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.678440 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7rvq\" (UniqueName: \"kubernetes.io/projected/a520be06-55f1-4803-b8ec-d5fa426da969-kube-api-access-b7rvq\") pod \"a520be06-55f1-4803-b8ec-d5fa426da969\" (UID: \"a520be06-55f1-4803-b8ec-d5fa426da969\") " Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.678486 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a520be06-55f1-4803-b8ec-d5fa426da969-operator-scripts\") pod \"a520be06-55f1-4803-b8ec-d5fa426da969\" (UID: \"a520be06-55f1-4803-b8ec-d5fa426da969\") " Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.680179 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a520be06-55f1-4803-b8ec-d5fa426da969-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a520be06-55f1-4803-b8ec-d5fa426da969" (UID: "a520be06-55f1-4803-b8ec-d5fa426da969"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.711425 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a520be06-55f1-4803-b8ec-d5fa426da969-kube-api-access-b7rvq" (OuterVolumeSpecName: "kube-api-access-b7rvq") pod "a520be06-55f1-4803-b8ec-d5fa426da969" (UID: "a520be06-55f1-4803-b8ec-d5fa426da969"). InnerVolumeSpecName "kube-api-access-b7rvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.780064 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm7p6\" (UniqueName: \"kubernetes.io/projected/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-kube-api-access-vm7p6\") pod \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.780413 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-scripts\") pod \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.780468 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-additional-scripts\") pod \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.780645 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run\") pod \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.780676 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-log-ovn\") pod \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.780695 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run-ovn\") pod \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\" (UID: \"32fcb0cb-4efa-4b4e-8945-482b85e9acd1\") " Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.780864 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run" (OuterVolumeSpecName: "var-run") pod "32fcb0cb-4efa-4b4e-8945-482b85e9acd1" (UID: "32fcb0cb-4efa-4b4e-8945-482b85e9acd1"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.780981 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "32fcb0cb-4efa-4b4e-8945-482b85e9acd1" (UID: "32fcb0cb-4efa-4b4e-8945-482b85e9acd1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.781069 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "32fcb0cb-4efa-4b4e-8945-482b85e9acd1" (UID: "32fcb0cb-4efa-4b4e-8945-482b85e9acd1"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.781351 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7rvq\" (UniqueName: \"kubernetes.io/projected/a520be06-55f1-4803-b8ec-d5fa426da969-kube-api-access-b7rvq\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.781432 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "32fcb0cb-4efa-4b4e-8945-482b85e9acd1" (UID: "32fcb0cb-4efa-4b4e-8945-482b85e9acd1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.781446 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a520be06-55f1-4803-b8ec-d5fa426da969-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.781564 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.781629 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.781764 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-scripts" (OuterVolumeSpecName: "scripts") pod "32fcb0cb-4efa-4b4e-8945-482b85e9acd1" (UID: "32fcb0cb-4efa-4b4e-8945-482b85e9acd1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.784201 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-kube-api-access-vm7p6" (OuterVolumeSpecName: "kube-api-access-vm7p6") pod "32fcb0cb-4efa-4b4e-8945-482b85e9acd1" (UID: "32fcb0cb-4efa-4b4e-8945-482b85e9acd1"). InnerVolumeSpecName "kube-api-access-vm7p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.882749 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.882783 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm7p6\" (UniqueName: \"kubernetes.io/projected/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-kube-api-access-vm7p6\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.882793 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:22 crc kubenswrapper[4799]: I0216 12:50:22.882801 4799 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32fcb0cb-4efa-4b4e-8945-482b85e9acd1-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.085528 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 12:50:23 crc kubenswrapper[4799]: W0216 12:50:23.090272 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95bfd980_54e7_4b29_a896_dc1cc52291fd.slice/crio-364e1aacdc11420fbfc1012f2d3ac6aada1b08d0736904271feedce0de0d9239 WatchSource:0}: Error finding container 364e1aacdc11420fbfc1012f2d3ac6aada1b08d0736904271feedce0de0d9239: Status 404 returned error can't find the container with id 364e1aacdc11420fbfc1012f2d3ac6aada1b08d0736904271feedce0de0d9239 Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.201320 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="5b6ff320-8742-454a-9a6e-766db7e2c3a8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.291816 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-27h4v" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.291916 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-27h4v" event={"ID":"32fcb0cb-4efa-4b4e-8945-482b85e9acd1","Type":"ContainerDied","Data":"3979f7df712c4d3ddb192db1569c865a033e0103a2294d8468b89521bf20f9fc"} Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.291946 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3979f7df712c4d3ddb192db1569c865a033e0103a2294d8468b89521bf20f9fc" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.294342 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e3da06f-f1ef-4b8c-963b-0994cde5fab7","Type":"ContainerStarted","Data":"5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc"} Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.294681 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 
12:50:23.296524 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxmd5" event={"ID":"ff79791d-f33a-4986-9dd4-67c6af5bf747","Type":"ContainerStarted","Data":"9d36bcf0e9b91e3d6eefd123eee0031ce1c5f0a0aa56b88ef64d8673381beb5f"} Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.300977 4799 generic.go:334] "Generic (PLEG): container finished" podID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerID="dc8c809f41a3c6d5bb196e795ccb1922696d25f9d178af72c5b242b22dd352fd" exitCode=0 Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.301014 4799 generic.go:334] "Generic (PLEG): container finished" podID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerID="c3f321bfaa92e5cc62a77dfc2c67710158e95540673cd1e66e556b54e609c988" exitCode=0 Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.301023 4799 generic.go:334] "Generic (PLEG): container finished" podID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerID="f233826153818b953c7c0806a3d1aa5f379a1798f7799f53a3b37914e7663993" exitCode=0 Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.301060 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerDied","Data":"dc8c809f41a3c6d5bb196e795ccb1922696d25f9d178af72c5b242b22dd352fd"} Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.301084 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerDied","Data":"c3f321bfaa92e5cc62a77dfc2c67710158e95540673cd1e66e556b54e609c988"} Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.301093 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerDied","Data":"f233826153818b953c7c0806a3d1aa5f379a1798f7799f53a3b37914e7663993"} Feb 16 12:50:23 crc 
kubenswrapper[4799]: I0216 12:50:23.303556 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"364e1aacdc11420fbfc1012f2d3ac6aada1b08d0736904271feedce0de0d9239"} Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.305284 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-86tqv" event={"ID":"a520be06-55f1-4803-b8ec-d5fa426da969","Type":"ContainerDied","Data":"6bb92be9bd023717ab5d3802a4e898a7d6252b5896ac7796f539bd42f01ea51b"} Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.305301 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb92be9bd023717ab5d3802a4e898a7d6252b5896ac7796f539bd42f01ea51b" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.305343 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-86tqv" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.321545 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371964.533249 podStartE2EDuration="1m12.321525995s" podCreationTimestamp="2026-02-16 12:49:11 +0000 UTC" firstStartedPulling="2026-02-16 12:49:28.290882043 +0000 UTC m=+1073.883897377" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:23.319055604 +0000 UTC m=+1128.912070938" watchObservedRunningTime="2026-02-16 12:50:23.321525995 +0000 UTC m=+1128.914541329" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.343973 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mxmd5" podStartSLOduration=2.314493851 podStartE2EDuration="17.343951006s" podCreationTimestamp="2026-02-16 12:50:06 +0000 UTC" firstStartedPulling="2026-02-16 12:50:07.438852488 +0000 UTC m=+1113.031867822" 
lastFinishedPulling="2026-02-16 12:50:22.468309643 +0000 UTC m=+1128.061324977" observedRunningTime="2026-02-16 12:50:23.337052986 +0000 UTC m=+1128.930068320" watchObservedRunningTime="2026-02-16 12:50:23.343951006 +0000 UTC m=+1128.936966340" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.775291 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wr6ph-config-27h4v"] Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.787924 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wr6ph-config-27h4v"] Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.819110 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wr6ph-config-bs8gq"] Feb 16 12:50:23 crc kubenswrapper[4799]: E0216 12:50:23.819596 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a520be06-55f1-4803-b8ec-d5fa426da969" containerName="mariadb-account-create-update" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.819616 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a520be06-55f1-4803-b8ec-d5fa426da969" containerName="mariadb-account-create-update" Feb 16 12:50:23 crc kubenswrapper[4799]: E0216 12:50:23.819645 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fcb0cb-4efa-4b4e-8945-482b85e9acd1" containerName="ovn-config" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.819654 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fcb0cb-4efa-4b4e-8945-482b85e9acd1" containerName="ovn-config" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.819858 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fcb0cb-4efa-4b4e-8945-482b85e9acd1" containerName="ovn-config" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.819875 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a520be06-55f1-4803-b8ec-d5fa426da969" containerName="mariadb-account-create-update" Feb 16 12:50:23 crc 
kubenswrapper[4799]: I0216 12:50:23.820635 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.834786 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-bs8gq"] Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.856340 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.906760 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-additional-scripts\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.906812 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sdj8\" (UniqueName: \"kubernetes.io/projected/deeaee17-4e43-4847-a859-c623cfb5a2c6-kube-api-access-9sdj8\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.906982 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run-ovn\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.907049 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.907101 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-scripts\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.907232 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-log-ovn\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:23 crc kubenswrapper[4799]: I0216 12:50:23.947546 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.009709 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-tls-assets\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.009820 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config-out\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.009863 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-0\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.009992 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010030 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010068 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-kzwql\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-kube-api-access-kzwql\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010094 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-web-config\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010184 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-2\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010256 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-thanos-prometheus-http-client-file\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010317 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-1\") pod \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\" (UID: \"98c6ac1b-2c6b-42f1-831c-e98661c6166d\") " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010581 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run-ovn\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010666 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010700 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-scripts\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010747 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-log-ovn\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010824 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-additional-scripts\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.010851 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sdj8\" (UniqueName: 
\"kubernetes.io/projected/deeaee17-4e43-4847-a859-c623cfb5a2c6-kube-api-access-9sdj8\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.011244 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.011345 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.011573 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-log-ovn\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.011705 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.011810 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run-ovn\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.012421 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.015018 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-scripts\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.015718 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-additional-scripts\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.024244 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod 
"98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.028661 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-kube-api-access-kzwql" (OuterVolumeSpecName: "kube-api-access-kzwql") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "kube-api-access-kzwql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.029470 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config-out" (OuterVolumeSpecName: "config-out") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.029515 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config" (OuterVolumeSpecName: "config") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.033683 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.039788 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sdj8\" (UniqueName: \"kubernetes.io/projected/deeaee17-4e43-4847-a859-c623cfb5a2c6-kube-api-access-9sdj8\") pod \"ovn-controller-wr6ph-config-bs8gq\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.052940 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-web-config" (OuterVolumeSpecName: "web-config") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.057577 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "98c6ac1b-2c6b-42f1-831c-e98661c6166d" (UID: "98c6ac1b-2c6b-42f1-831c-e98661c6166d"). InnerVolumeSpecName "pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112359 4799 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112679 4799 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112690 4799 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112698 4799 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112731 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") on node \"crc\" " Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112744 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112756 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzwql\" (UniqueName: 
\"kubernetes.io/projected/98c6ac1b-2c6b-42f1-831c-e98661c6166d-kube-api-access-kzwql\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112765 4799 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112773 4799 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/98c6ac1b-2c6b-42f1-831c-e98661c6166d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.112785 4799 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/98c6ac1b-2c6b-42f1-831c-e98661c6166d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.137510 4799 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.137690 4799 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd") on node "crc" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.216260 4799 reconciler_common.go:293] "Volume detached for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.263475 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.343685 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"98c6ac1b-2c6b-42f1-831c-e98661c6166d","Type":"ContainerDied","Data":"7dfc3ea490a6aec87c74d9374265462dec182356c09372d2e06fa56583dbd106"} Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.343738 4799 scope.go:117] "RemoveContainer" containerID="dc8c809f41a3c6d5bb196e795ccb1922696d25f9d178af72c5b242b22dd352fd" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.343868 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.364731 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"92441a2cc22db72817a1a7192920d6e6d2f8b7d7abdaffd875dabcee710047d3"} Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.364780 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"13d25e6cc97365503916485f0c3f37b221ca1f5a5b25b0b145874a06ef214fc4"} Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.393925 4799 scope.go:117] "RemoveContainer" containerID="c3f321bfaa92e5cc62a77dfc2c67710158e95540673cd1e66e556b54e609c988" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.448544 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.465918 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.475224 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:50:24 crc kubenswrapper[4799]: E0216 12:50:24.475630 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="config-reloader" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.475642 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="config-reloader" Feb 16 12:50:24 crc kubenswrapper[4799]: E0216 12:50:24.475665 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="prometheus" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.475671 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="prometheus" Feb 16 12:50:24 crc kubenswrapper[4799]: E0216 12:50:24.475679 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="init-config-reloader" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.475687 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="init-config-reloader" Feb 16 12:50:24 crc kubenswrapper[4799]: E0216 12:50:24.475712 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="thanos-sidecar" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.475717 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="thanos-sidecar" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.475857 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="prometheus" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.475875 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="config-reloader" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.475889 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" containerName="thanos-sidecar" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.477345 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.492930 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.493142 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.493251 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.493295 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9r2q7" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.493431 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.493484 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.493493 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.493259 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 
12:50:24.498528 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.504559 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522673 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522723 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2hks\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-kube-api-access-k2hks\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522752 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522776 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522814 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522848 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522868 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522905 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522927 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3dbdb842-28de-45d4-8706-54b8671c18b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522965 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.522985 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.523015 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.523033 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.550281 4799 scope.go:117] "RemoveContainer" containerID="f233826153818b953c7c0806a3d1aa5f379a1798f7799f53a3b37914e7663993" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.589528 4799 scope.go:117] "RemoveContainer" containerID="0677b2ed4f0c4c4fee9ab7c93aa1d391e2c5ae3c940ee43085ef2f90e92099d2" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.609162 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-86tqv"] Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.624768 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.624836 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3dbdb842-28de-45d4-8706-54b8671c18b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.624915 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.624950 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.625006 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.625037 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.625077 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.625113 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2hks\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-kube-api-access-k2hks\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: 
I0216 12:50:24.625162 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.625191 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.625228 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.625265 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.625300 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" 
Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.626055 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.626639 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.630262 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.631722 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.634521 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.634568 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d8cc6eee7369a0a6de9fc43cae4068e826e1253c0ec6fd8cae0c234b0f57b7e3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.635100 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.635365 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-86tqv"] Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.638488 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3dbdb842-28de-45d4-8706-54b8671c18b7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.639198 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 
12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.648282 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-config\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.648644 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.648751 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.649110 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.662985 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2hks\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-kube-api-access-k2hks\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.696318 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") " pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.861983 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:24 crc kubenswrapper[4799]: I0216 12:50:24.867091 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-bs8gq"] Feb 16 12:50:24 crc kubenswrapper[4799]: W0216 12:50:24.874693 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeeaee17_4e43_4847_a859_c623cfb5a2c6.slice/crio-e080e04613cde7b48ff409b1ccd1538620f666871d6b7f606c4e7078e8324738 WatchSource:0}: Error finding container e080e04613cde7b48ff409b1ccd1538620f666871d6b7f606c4e7078e8324738: Status 404 returned error can't find the container with id e080e04613cde7b48ff409b1ccd1538620f666871d6b7f606c4e7078e8324738 Feb 16 12:50:25 crc kubenswrapper[4799]: I0216 12:50:25.167471 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32fcb0cb-4efa-4b4e-8945-482b85e9acd1" path="/var/lib/kubelet/pods/32fcb0cb-4efa-4b4e-8945-482b85e9acd1/volumes" Feb 16 12:50:25 crc kubenswrapper[4799]: I0216 12:50:25.169200 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98c6ac1b-2c6b-42f1-831c-e98661c6166d" path="/var/lib/kubelet/pods/98c6ac1b-2c6b-42f1-831c-e98661c6166d/volumes" Feb 16 12:50:25 crc kubenswrapper[4799]: I0216 12:50:25.170200 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a520be06-55f1-4803-b8ec-d5fa426da969" path="/var/lib/kubelet/pods/a520be06-55f1-4803-b8ec-d5fa426da969/volumes" Feb 16 12:50:25 crc kubenswrapper[4799]: I0216 12:50:25.340483 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 12:50:25 crc kubenswrapper[4799]: W0216 12:50:25.347950 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dbdb842_28de_45d4_8706_54b8671c18b7.slice/crio-ae586faf6df30d64c02a242650ac74471761e1617f12e71dceaf728f5355e7a3 WatchSource:0}: Error finding container ae586faf6df30d64c02a242650ac74471761e1617f12e71dceaf728f5355e7a3: Status 404 returned error can't find the container with id ae586faf6df30d64c02a242650ac74471761e1617f12e71dceaf728f5355e7a3 Feb 16 12:50:25 crc kubenswrapper[4799]: I0216 12:50:25.381663 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerStarted","Data":"ae586faf6df30d64c02a242650ac74471761e1617f12e71dceaf728f5355e7a3"} Feb 16 12:50:25 crc kubenswrapper[4799]: I0216 12:50:25.384529 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"51fd2a906dd0527c0e3b625753d918e2bec80236951a6bfdff1b51e15c8806da"} Feb 16 12:50:25 crc kubenswrapper[4799]: I0216 12:50:25.384558 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"9906d101a99b30fe7602003e5d37b7aa2ea04171576be78775d80e77a6451769"} Feb 16 12:50:25 crc kubenswrapper[4799]: I0216 12:50:25.387224 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-bs8gq" 
event={"ID":"deeaee17-4e43-4847-a859-c623cfb5a2c6","Type":"ContainerStarted","Data":"e080e04613cde7b48ff409b1ccd1538620f666871d6b7f606c4e7078e8324738"} Feb 16 12:50:26 crc kubenswrapper[4799]: I0216 12:50:26.402637 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"63fdc94d0b3b8f6e885662ad6a5426e10a4aa8c731af92409d35fc237eb3ff46"} Feb 16 12:50:26 crc kubenswrapper[4799]: I0216 12:50:26.408215 4799 generic.go:334] "Generic (PLEG): container finished" podID="deeaee17-4e43-4847-a859-c623cfb5a2c6" containerID="2014761acff9b23d77794ad37caeba7c52f1bab979939ed97275d0e938a0da8b" exitCode=0 Feb 16 12:50:26 crc kubenswrapper[4799]: I0216 12:50:26.408277 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-bs8gq" event={"ID":"deeaee17-4e43-4847-a859-c623cfb5a2c6","Type":"ContainerDied","Data":"2014761acff9b23d77794ad37caeba7c52f1bab979939ed97275d0e938a0da8b"} Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.423213 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"be30b64bd93139ffacb21ab5e20804bcec85c839869a5279648b7212f5b127d4"} Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.423584 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"845f409ecae426af53eb9c27056f6ebf17cac5c2683f2a9f4cca1f885e05c08f"} Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.423598 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"e8471b2c885b15a263d69caa50c3610eb4b9bcfd449e428fc9657d6a422e07fa"} Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.743000 
4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.884930 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-additional-scripts\") pod \"deeaee17-4e43-4847-a859-c623cfb5a2c6\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885069 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run-ovn\") pod \"deeaee17-4e43-4847-a859-c623cfb5a2c6\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885114 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sdj8\" (UniqueName: \"kubernetes.io/projected/deeaee17-4e43-4847-a859-c623cfb5a2c6-kube-api-access-9sdj8\") pod \"deeaee17-4e43-4847-a859-c623cfb5a2c6\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885215 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "deeaee17-4e43-4847-a859-c623cfb5a2c6" (UID: "deeaee17-4e43-4847-a859-c623cfb5a2c6"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885259 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-scripts\") pod \"deeaee17-4e43-4847-a859-c623cfb5a2c6\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885290 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run\") pod \"deeaee17-4e43-4847-a859-c623cfb5a2c6\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885309 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-log-ovn\") pod \"deeaee17-4e43-4847-a859-c623cfb5a2c6\" (UID: \"deeaee17-4e43-4847-a859-c623cfb5a2c6\") " Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885370 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run" (OuterVolumeSpecName: "var-run") pod "deeaee17-4e43-4847-a859-c623cfb5a2c6" (UID: "deeaee17-4e43-4847-a859-c623cfb5a2c6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885503 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "deeaee17-4e43-4847-a859-c623cfb5a2c6" (UID: "deeaee17-4e43-4847-a859-c623cfb5a2c6"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885809 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885828 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885841 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/deeaee17-4e43-4847-a859-c623cfb5a2c6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.885842 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "deeaee17-4e43-4847-a859-c623cfb5a2c6" (UID: "deeaee17-4e43-4847-a859-c623cfb5a2c6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.886387 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-scripts" (OuterVolumeSpecName: "scripts") pod "deeaee17-4e43-4847-a859-c623cfb5a2c6" (UID: "deeaee17-4e43-4847-a859-c623cfb5a2c6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.987657 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:27 crc kubenswrapper[4799]: I0216 12:50:27.987702 4799 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/deeaee17-4e43-4847-a859-c623cfb5a2c6-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.003179 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deeaee17-4e43-4847-a859-c623cfb5a2c6-kube-api-access-9sdj8" (OuterVolumeSpecName: "kube-api-access-9sdj8") pod "deeaee17-4e43-4847-a859-c623cfb5a2c6" (UID: "deeaee17-4e43-4847-a859-c623cfb5a2c6"). InnerVolumeSpecName "kube-api-access-9sdj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.030406 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j5sct"] Feb 16 12:50:28 crc kubenswrapper[4799]: E0216 12:50:28.031142 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deeaee17-4e43-4847-a859-c623cfb5a2c6" containerName="ovn-config" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.031287 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="deeaee17-4e43-4847-a859-c623cfb5a2c6" containerName="ovn-config" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.031561 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="deeaee17-4e43-4847-a859-c623cfb5a2c6" containerName="ovn-config" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.032372 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.036250 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.039809 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j5sct"] Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.089575 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sdj8\" (UniqueName: \"kubernetes.io/projected/deeaee17-4e43-4847-a859-c623cfb5a2c6-kube-api-access-9sdj8\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.191070 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ddf35d-e8d4-4da5-b526-49abe0912403-operator-scripts\") pod \"root-account-create-update-j5sct\" (UID: \"58ddf35d-e8d4-4da5-b526-49abe0912403\") " pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.191216 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8xbb\" (UniqueName: \"kubernetes.io/projected/58ddf35d-e8d4-4da5-b526-49abe0912403-kube-api-access-h8xbb\") pod \"root-account-create-update-j5sct\" (UID: \"58ddf35d-e8d4-4da5-b526-49abe0912403\") " pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.292888 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ddf35d-e8d4-4da5-b526-49abe0912403-operator-scripts\") pod \"root-account-create-update-j5sct\" (UID: \"58ddf35d-e8d4-4da5-b526-49abe0912403\") " pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 
12:50:28.292993 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8xbb\" (UniqueName: \"kubernetes.io/projected/58ddf35d-e8d4-4da5-b526-49abe0912403-kube-api-access-h8xbb\") pod \"root-account-create-update-j5sct\" (UID: \"58ddf35d-e8d4-4da5-b526-49abe0912403\") " pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.294403 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ddf35d-e8d4-4da5-b526-49abe0912403-operator-scripts\") pod \"root-account-create-update-j5sct\" (UID: \"58ddf35d-e8d4-4da5-b526-49abe0912403\") " pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.315040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8xbb\" (UniqueName: \"kubernetes.io/projected/58ddf35d-e8d4-4da5-b526-49abe0912403-kube-api-access-h8xbb\") pod \"root-account-create-update-j5sct\" (UID: \"58ddf35d-e8d4-4da5-b526-49abe0912403\") " pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.351365 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.462560 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerStarted","Data":"c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c"} Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.516842 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"13129e84bb1ec71851c34787decf8985e3578059be3ca257d70ad06d933959dc"} Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.518077 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"20f4076850f83bbe50246b9f608ba8cc29f5a4fd72ddf5f30adbf2987fa2a27b"} Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.518286 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"441259bd3960a1077ca035288068731d1551107dae0f8214d5835dec3b524009"} Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.525741 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-bs8gq" event={"ID":"deeaee17-4e43-4847-a859-c623cfb5a2c6","Type":"ContainerDied","Data":"e080e04613cde7b48ff409b1ccd1538620f666871d6b7f606c4e7078e8324738"} Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.525783 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e080e04613cde7b48ff409b1ccd1538620f666871d6b7f606c4e7078e8324738" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.526036 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-bs8gq" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.821078 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wr6ph-config-bs8gq"] Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.832677 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wr6ph-config-bs8gq"] Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.869089 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wr6ph-config-67l2n"] Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.872262 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.874747 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.889408 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-67l2n"] Feb 16 12:50:28 crc kubenswrapper[4799]: I0216 12:50:28.899503 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j5sct"] Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.012613 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.012683 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjf7h\" (UniqueName: \"kubernetes.io/projected/b0aaf6e7-4a12-4815-b655-4c42df40dec9-kube-api-access-pjf7h\") pod 
\"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.012744 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run-ovn\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.012788 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-scripts\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.012990 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-log-ovn\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.013113 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-additional-scripts\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114446 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114517 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjf7h\" (UniqueName: \"kubernetes.io/projected/b0aaf6e7-4a12-4815-b655-4c42df40dec9-kube-api-access-pjf7h\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114553 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run-ovn\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114577 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-scripts\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114648 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-log-ovn\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114704 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-additional-scripts\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114776 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114853 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-log-ovn\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.114886 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run-ovn\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.115430 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-additional-scripts\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.116761 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-scripts\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.159516 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deeaee17-4e43-4847-a859-c623cfb5a2c6" path="/var/lib/kubelet/pods/deeaee17-4e43-4847-a859-c623cfb5a2c6/volumes" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.166782 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjf7h\" (UniqueName: \"kubernetes.io/projected/b0aaf6e7-4a12-4815-b655-4c42df40dec9-kube-api-access-pjf7h\") pod \"ovn-controller-wr6ph-config-67l2n\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.191561 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.543963 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"ce081a89cabac5b1c6300d2fd2de9b05ccd1d41949d42976ff8b632819480ce3"} Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.544548 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"1dc9bc7c3f131c5b7776a2e491b039d13765ff0cadad58fee124639cd4dd1798"} Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.544563 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"57d1d363355ad87a1e2d970616cded58eb747701eee8767d46c368a49462b812"} Feb 16 12:50:29 crc 
kubenswrapper[4799]: I0216 12:50:29.544576 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"95bfd980-54e7-4b29-a896-dc1cc52291fd","Type":"ContainerStarted","Data":"261f17054d5210b3c5766254fcbe02a66091a0620a34665f7355579b084156d3"} Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.551959 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j5sct" event={"ID":"58ddf35d-e8d4-4da5-b526-49abe0912403","Type":"ContainerStarted","Data":"a934223813f802c99ad4347046828761b2fd1fa3074d997cba0b6034cc4c14c8"} Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.552002 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j5sct" event={"ID":"58ddf35d-e8d4-4da5-b526-49abe0912403","Type":"ContainerStarted","Data":"4778d06c386269587b9abaa8fb98318baf459640b56c8d94d1f34c30bebad4c7"} Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.586851 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.1341486 podStartE2EDuration="41.586834115s" podCreationTimestamp="2026-02-16 12:49:48 +0000 UTC" firstStartedPulling="2026-02-16 12:50:23.093314927 +0000 UTC m=+1128.686330261" lastFinishedPulling="2026-02-16 12:50:27.546000442 +0000 UTC m=+1133.139015776" observedRunningTime="2026-02-16 12:50:29.582390516 +0000 UTC m=+1135.175405850" watchObservedRunningTime="2026-02-16 12:50:29.586834115 +0000 UTC m=+1135.179849449" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.727534 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-j5sct" podStartSLOduration=1.727516394 podStartE2EDuration="1.727516394s" podCreationTimestamp="2026-02-16 12:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:29.610714627 +0000 
UTC m=+1135.203729961" watchObservedRunningTime="2026-02-16 12:50:29.727516394 +0000 UTC m=+1135.320531728" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.728832 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-67l2n"] Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.871610 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b5c968d55-8gh88"] Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.873269 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.875537 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 12:50:29 crc kubenswrapper[4799]: I0216 12:50:29.903822 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b5c968d55-8gh88"] Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.031263 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-swift-storage-0\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.031375 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-config\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.031410 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.031437 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.031459 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f6h2\" (UniqueName: \"kubernetes.io/projected/92c214b0-98ba-493c-a0eb-e465a172f9f7-kube-api-access-8f6h2\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.031540 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-svc\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.133453 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-config\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.133495 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.133516 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.133537 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f6h2\" (UniqueName: \"kubernetes.io/projected/92c214b0-98ba-493c-a0eb-e465a172f9f7-kube-api-access-8f6h2\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.133589 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-svc\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.133654 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-swift-storage-0\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.134478 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.134798 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-config\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.134825 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.134845 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-swift-storage-0\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.135200 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-svc\") pod \"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.160652 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f6h2\" (UniqueName: \"kubernetes.io/projected/92c214b0-98ba-493c-a0eb-e465a172f9f7-kube-api-access-8f6h2\") pod 
\"dnsmasq-dns-7b5c968d55-8gh88\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.192339 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.487854 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b5c968d55-8gh88"] Feb 16 12:50:30 crc kubenswrapper[4799]: W0216 12:50:30.491632 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92c214b0_98ba_493c_a0eb_e465a172f9f7.slice/crio-336d3b54bef9d013797e34f2703596b600cac47e1169fd3f78337b496b7d957e WatchSource:0}: Error finding container 336d3b54bef9d013797e34f2703596b600cac47e1169fd3f78337b496b7d957e: Status 404 returned error can't find the container with id 336d3b54bef9d013797e34f2703596b600cac47e1169fd3f78337b496b7d957e Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.564176 4799 generic.go:334] "Generic (PLEG): container finished" podID="b0aaf6e7-4a12-4815-b655-4c42df40dec9" containerID="c8f6d15b16d49252fe7dfceef2ed17ed454d46659fc368ae56d30c92e2bb5889" exitCode=0 Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.564309 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-67l2n" event={"ID":"b0aaf6e7-4a12-4815-b655-4c42df40dec9","Type":"ContainerDied","Data":"c8f6d15b16d49252fe7dfceef2ed17ed454d46659fc368ae56d30c92e2bb5889"} Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.564345 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-67l2n" event={"ID":"b0aaf6e7-4a12-4815-b655-4c42df40dec9","Type":"ContainerStarted","Data":"1a59a844ffd35079cfb7c55affaf013ec47c7547a397d70882b3d81711d46926"} Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.565303 4799 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" event={"ID":"92c214b0-98ba-493c-a0eb-e465a172f9f7","Type":"ContainerStarted","Data":"336d3b54bef9d013797e34f2703596b600cac47e1169fd3f78337b496b7d957e"} Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.567185 4799 generic.go:334] "Generic (PLEG): container finished" podID="58ddf35d-e8d4-4da5-b526-49abe0912403" containerID="a934223813f802c99ad4347046828761b2fd1fa3074d997cba0b6034cc4c14c8" exitCode=0 Feb 16 12:50:30 crc kubenswrapper[4799]: I0216 12:50:30.567248 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j5sct" event={"ID":"58ddf35d-e8d4-4da5-b526-49abe0912403","Type":"ContainerDied","Data":"a934223813f802c99ad4347046828761b2fd1fa3074d997cba0b6034cc4c14c8"} Feb 16 12:50:31 crc kubenswrapper[4799]: I0216 12:50:31.577083 4799 generic.go:334] "Generic (PLEG): container finished" podID="92c214b0-98ba-493c-a0eb-e465a172f9f7" containerID="f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f" exitCode=0 Feb 16 12:50:31 crc kubenswrapper[4799]: I0216 12:50:31.579208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" event={"ID":"92c214b0-98ba-493c-a0eb-e465a172f9f7","Type":"ContainerDied","Data":"f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f"} Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.012729 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.020548 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.168949 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ddf35d-e8d4-4da5-b526-49abe0912403-operator-scripts\") pod \"58ddf35d-e8d4-4da5-b526-49abe0912403\" (UID: \"58ddf35d-e8d4-4da5-b526-49abe0912403\") " Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.169079 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjf7h\" (UniqueName: \"kubernetes.io/projected/b0aaf6e7-4a12-4815-b655-4c42df40dec9-kube-api-access-pjf7h\") pod \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.169117 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8xbb\" (UniqueName: \"kubernetes.io/projected/58ddf35d-e8d4-4da5-b526-49abe0912403-kube-api-access-h8xbb\") pod \"58ddf35d-e8d4-4da5-b526-49abe0912403\" (UID: \"58ddf35d-e8d4-4da5-b526-49abe0912403\") " Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.169203 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-log-ovn\") pod \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.169303 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run\") pod \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.169387 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-scripts\") pod \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.169418 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-additional-scripts\") pod \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.169448 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run-ovn\") pod \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\" (UID: \"b0aaf6e7-4a12-4815-b655-4c42df40dec9\") " Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.169906 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b0aaf6e7-4a12-4815-b655-4c42df40dec9" (UID: "b0aaf6e7-4a12-4815-b655-4c42df40dec9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.170453 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ddf35d-e8d4-4da5-b526-49abe0912403-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58ddf35d-e8d4-4da5-b526-49abe0912403" (UID: "58ddf35d-e8d4-4da5-b526-49abe0912403"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.171053 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run" (OuterVolumeSpecName: "var-run") pod "b0aaf6e7-4a12-4815-b655-4c42df40dec9" (UID: "b0aaf6e7-4a12-4815-b655-4c42df40dec9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.171146 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b0aaf6e7-4a12-4815-b655-4c42df40dec9" (UID: "b0aaf6e7-4a12-4815-b655-4c42df40dec9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.171726 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b0aaf6e7-4a12-4815-b655-4c42df40dec9" (UID: "b0aaf6e7-4a12-4815-b655-4c42df40dec9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.172716 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-scripts" (OuterVolumeSpecName: "scripts") pod "b0aaf6e7-4a12-4815-b655-4c42df40dec9" (UID: "b0aaf6e7-4a12-4815-b655-4c42df40dec9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.175600 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0aaf6e7-4a12-4815-b655-4c42df40dec9-kube-api-access-pjf7h" (OuterVolumeSpecName: "kube-api-access-pjf7h") pod "b0aaf6e7-4a12-4815-b655-4c42df40dec9" (UID: "b0aaf6e7-4a12-4815-b655-4c42df40dec9"). InnerVolumeSpecName "kube-api-access-pjf7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.175727 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ddf35d-e8d4-4da5-b526-49abe0912403-kube-api-access-h8xbb" (OuterVolumeSpecName: "kube-api-access-h8xbb") pod "58ddf35d-e8d4-4da5-b526-49abe0912403" (UID: "58ddf35d-e8d4-4da5-b526-49abe0912403"). InnerVolumeSpecName "kube-api-access-h8xbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.272814 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.274618 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.274648 4799 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0aaf6e7-4a12-4815-b655-4c42df40dec9-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.274672 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-run-ovn\") on 
node \"crc\" DevicePath \"\"" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.274698 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58ddf35d-e8d4-4da5-b526-49abe0912403-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.274721 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjf7h\" (UniqueName: \"kubernetes.io/projected/b0aaf6e7-4a12-4815-b655-4c42df40dec9-kube-api-access-pjf7h\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.274745 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8xbb\" (UniqueName: \"kubernetes.io/projected/58ddf35d-e8d4-4da5-b526-49abe0912403-kube-api-access-h8xbb\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.274769 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0aaf6e7-4a12-4815-b655-4c42df40dec9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.579396 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.591534 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j5sct" event={"ID":"58ddf35d-e8d4-4da5-b526-49abe0912403","Type":"ContainerDied","Data":"4778d06c386269587b9abaa8fb98318baf459640b56c8d94d1f34c30bebad4c7"} Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.591583 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4778d06c386269587b9abaa8fb98318baf459640b56c8d94d1f34c30bebad4c7" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.591656 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j5sct" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.606343 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-67l2n" event={"ID":"b0aaf6e7-4a12-4815-b655-4c42df40dec9","Type":"ContainerDied","Data":"1a59a844ffd35079cfb7c55affaf013ec47c7547a397d70882b3d81711d46926"} Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.606628 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a59a844ffd35079cfb7c55affaf013ec47c7547a397d70882b3d81711d46926" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.606376 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-67l2n" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.611199 4799 generic.go:334] "Generic (PLEG): container finished" podID="ff79791d-f33a-4986-9dd4-67c6af5bf747" containerID="9d36bcf0e9b91e3d6eefd123eee0031ce1c5f0a0aa56b88ef64d8673381beb5f" exitCode=0 Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.611285 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxmd5" event={"ID":"ff79791d-f33a-4986-9dd4-67c6af5bf747","Type":"ContainerDied","Data":"9d36bcf0e9b91e3d6eefd123eee0031ce1c5f0a0aa56b88ef64d8673381beb5f"} Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.621388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" event={"ID":"92c214b0-98ba-493c-a0eb-e465a172f9f7","Type":"ContainerStarted","Data":"ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd"} Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.621560 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.681980 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" podStartSLOduration=3.681964141 podStartE2EDuration="3.681964141s" podCreationTimestamp="2026-02-16 12:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:32.676884814 +0000 UTC m=+1138.269900168" watchObservedRunningTime="2026-02-16 12:50:32.681964141 +0000 UTC m=+1138.274979475" Feb 16 12:50:32 crc kubenswrapper[4799]: I0216 12:50:32.903895 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.141159 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dvv2w"] Feb 16 12:50:33 crc kubenswrapper[4799]: E0216 12:50:33.141528 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0aaf6e7-4a12-4815-b655-4c42df40dec9" containerName="ovn-config" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.141547 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0aaf6e7-4a12-4815-b655-4c42df40dec9" containerName="ovn-config" Feb 16 12:50:33 crc kubenswrapper[4799]: E0216 12:50:33.141596 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ddf35d-e8d4-4da5-b526-49abe0912403" containerName="mariadb-account-create-update" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.141607 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ddf35d-e8d4-4da5-b526-49abe0912403" containerName="mariadb-account-create-update" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.141870 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0aaf6e7-4a12-4815-b655-4c42df40dec9" containerName="ovn-config" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.141895 4799 
memory_manager.go:354] "RemoveStaleState removing state" podUID="58ddf35d-e8d4-4da5-b526-49abe0912403" containerName="mariadb-account-create-update" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.143047 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.162795 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dvv2w"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.176270 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wr6ph-config-67l2n"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.191175 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043950ea-86bb-464a-b829-8816123fe1cd-operator-scripts\") pod \"cinder-db-create-dvv2w\" (UID: \"043950ea-86bb-464a-b829-8816123fe1cd\") " pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.191236 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xhp2\" (UniqueName: \"kubernetes.io/projected/043950ea-86bb-464a-b829-8816123fe1cd-kube-api-access-4xhp2\") pod \"cinder-db-create-dvv2w\" (UID: \"043950ea-86bb-464a-b829-8816123fe1cd\") " pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.207906 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.212186 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wr6ph-config-67l2n"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.299149 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/043950ea-86bb-464a-b829-8816123fe1cd-operator-scripts\") pod \"cinder-db-create-dvv2w\" (UID: \"043950ea-86bb-464a-b829-8816123fe1cd\") " pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.299215 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xhp2\" (UniqueName: \"kubernetes.io/projected/043950ea-86bb-464a-b829-8816123fe1cd-kube-api-access-4xhp2\") pod \"cinder-db-create-dvv2w\" (UID: \"043950ea-86bb-464a-b829-8816123fe1cd\") " pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.303784 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043950ea-86bb-464a-b829-8816123fe1cd-operator-scripts\") pod \"cinder-db-create-dvv2w\" (UID: \"043950ea-86bb-464a-b829-8816123fe1cd\") " pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.304101 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1745-account-create-update-h4zdh"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.305897 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.313236 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1745-account-create-update-h4zdh"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.329961 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.349791 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wr6ph-config-hlxms"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.350364 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xhp2\" (UniqueName: \"kubernetes.io/projected/043950ea-86bb-464a-b829-8816123fe1cd-kube-api-access-4xhp2\") pod \"cinder-db-create-dvv2w\" (UID: \"043950ea-86bb-464a-b829-8816123fe1cd\") " pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.351861 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.365859 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.370873 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-hlxms"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.401571 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ccf\" (UniqueName: \"kubernetes.io/projected/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-kube-api-access-d9ccf\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.401645 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdw4\" (UniqueName: \"kubernetes.io/projected/201c84bc-cc45-471a-a86c-fe79ab2a2174-kube-api-access-xhdw4\") pod \"cinder-1745-account-create-update-h4zdh\" (UID: \"201c84bc-cc45-471a-a86c-fe79ab2a2174\") " pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.401669 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201c84bc-cc45-471a-a86c-fe79ab2a2174-operator-scripts\") pod \"cinder-1745-account-create-update-h4zdh\" (UID: \"201c84bc-cc45-471a-a86c-fe79ab2a2174\") " pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.401700 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-log-ovn\") 
pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.401729 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-scripts\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.401761 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.401809 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-additional-scripts\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.401841 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run-ovn\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.429291 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-l2t2n"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 
12:50:33.430435 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.446460 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8scmb"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.447955 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.454965 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.455165 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.455302 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.455446 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qqmr2" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.464172 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l2t2n"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.474198 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8scmb"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.484475 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.504721 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-additional-scripts\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.504794 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run-ovn\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.504837 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591accad-9c4d-4e29-bdf9-d673ed928210-operator-scripts\") pod \"barbican-db-create-l2t2n\" (UID: \"591accad-9c4d-4e29-bdf9-d673ed928210\") " pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.504894 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9ccf\" (UniqueName: \"kubernetes.io/projected/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-kube-api-access-d9ccf\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.504919 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxblx\" (UniqueName: \"kubernetes.io/projected/591accad-9c4d-4e29-bdf9-d673ed928210-kube-api-access-mxblx\") pod 
\"barbican-db-create-l2t2n\" (UID: \"591accad-9c4d-4e29-bdf9-d673ed928210\") " pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.504953 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-combined-ca-bundle\") pod \"keystone-db-sync-8scmb\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.504994 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjqvn\" (UniqueName: \"kubernetes.io/projected/1144c46a-41c9-4032-8811-2b3c930586f9-kube-api-access-cjqvn\") pod \"keystone-db-sync-8scmb\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.505028 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhdw4\" (UniqueName: \"kubernetes.io/projected/201c84bc-cc45-471a-a86c-fe79ab2a2174-kube-api-access-xhdw4\") pod \"cinder-1745-account-create-update-h4zdh\" (UID: \"201c84bc-cc45-471a-a86c-fe79ab2a2174\") " pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.505061 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201c84bc-cc45-471a-a86c-fe79ab2a2174-operator-scripts\") pod \"cinder-1745-account-create-update-h4zdh\" (UID: \"201c84bc-cc45-471a-a86c-fe79ab2a2174\") " pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.505105 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-log-ovn\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.505160 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-config-data\") pod \"keystone-db-sync-8scmb\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.505188 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-scripts\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.505226 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.505533 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.505919 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run-ovn\") 
pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.506302 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-additional-scripts\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.506377 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-log-ovn\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.507048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201c84bc-cc45-471a-a86c-fe79ab2a2174-operator-scripts\") pod \"cinder-1745-account-create-update-h4zdh\" (UID: \"201c84bc-cc45-471a-a86c-fe79ab2a2174\") " pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.523588 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-scripts\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.550790 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2a53-account-create-update-zc78k"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.552083 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.554336 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9ccf\" (UniqueName: \"kubernetes.io/projected/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-kube-api-access-d9ccf\") pod \"ovn-controller-wr6ph-config-hlxms\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.556012 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.556633 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2a53-account-create-update-zc78k"] Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.569822 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhdw4\" (UniqueName: \"kubernetes.io/projected/201c84bc-cc45-471a-a86c-fe79ab2a2174-kube-api-access-xhdw4\") pod \"cinder-1745-account-create-update-h4zdh\" (UID: \"201c84bc-cc45-471a-a86c-fe79ab2a2174\") " pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.607584 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591accad-9c4d-4e29-bdf9-d673ed928210-operator-scripts\") pod \"barbican-db-create-l2t2n\" (UID: \"591accad-9c4d-4e29-bdf9-d673ed928210\") " pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.607674 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxblx\" (UniqueName: \"kubernetes.io/projected/591accad-9c4d-4e29-bdf9-d673ed928210-kube-api-access-mxblx\") pod \"barbican-db-create-l2t2n\" (UID: \"591accad-9c4d-4e29-bdf9-d673ed928210\") " 
pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.607710 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-combined-ca-bundle\") pod \"keystone-db-sync-8scmb\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.607750 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430af87d-ae1f-4b73-93e7-d8aa93192ae5-operator-scripts\") pod \"barbican-2a53-account-create-update-zc78k\" (UID: \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\") " pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.607780 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjqvn\" (UniqueName: \"kubernetes.io/projected/1144c46a-41c9-4032-8811-2b3c930586f9-kube-api-access-cjqvn\") pod \"keystone-db-sync-8scmb\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.607805 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc4gl\" (UniqueName: \"kubernetes.io/projected/430af87d-ae1f-4b73-93e7-d8aa93192ae5-kube-api-access-jc4gl\") pod \"barbican-2a53-account-create-update-zc78k\" (UID: \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\") " pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.607908 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-config-data\") pod \"keystone-db-sync-8scmb\" (UID: 
\"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.610588 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591accad-9c4d-4e29-bdf9-d673ed928210-operator-scripts\") pod \"barbican-db-create-l2t2n\" (UID: \"591accad-9c4d-4e29-bdf9-d673ed928210\") " pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.616541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-combined-ca-bundle\") pod \"keystone-db-sync-8scmb\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.616703 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-config-data\") pod \"keystone-db-sync-8scmb\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.640960 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxblx\" (UniqueName: \"kubernetes.io/projected/591accad-9c4d-4e29-bdf9-d673ed928210-kube-api-access-mxblx\") pod \"barbican-db-create-l2t2n\" (UID: \"591accad-9c4d-4e29-bdf9-d673ed928210\") " pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.646002 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjqvn\" (UniqueName: \"kubernetes.io/projected/1144c46a-41c9-4032-8811-2b3c930586f9-kube-api-access-cjqvn\") pod \"keystone-db-sync-8scmb\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:33 crc 
kubenswrapper[4799]: I0216 12:50:33.695155 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.711702 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430af87d-ae1f-4b73-93e7-d8aa93192ae5-operator-scripts\") pod \"barbican-2a53-account-create-update-zc78k\" (UID: \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\") " pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.711747 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc4gl\" (UniqueName: \"kubernetes.io/projected/430af87d-ae1f-4b73-93e7-d8aa93192ae5-kube-api-access-jc4gl\") pod \"barbican-2a53-account-create-update-zc78k\" (UID: \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\") " pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.712584 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430af87d-ae1f-4b73-93e7-d8aa93192ae5-operator-scripts\") pod \"barbican-2a53-account-create-update-zc78k\" (UID: \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\") " pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.713261 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.732945 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc4gl\" (UniqueName: \"kubernetes.io/projected/430af87d-ae1f-4b73-93e7-d8aa93192ae5-kube-api-access-jc4gl\") pod \"barbican-2a53-account-create-update-zc78k\" (UID: \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\") " pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.759608 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:33 crc kubenswrapper[4799]: I0216 12:50:33.770204 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:34 crc kubenswrapper[4799]: I0216 12:50:34.024542 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:34 crc kubenswrapper[4799]: I0216 12:50:34.083567 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dvv2w"] Feb 16 12:50:34 crc kubenswrapper[4799]: I0216 12:50:34.423284 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-l2t2n"] Feb 16 12:50:34 crc kubenswrapper[4799]: W0216 12:50:34.433973 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod591accad_9c4d_4e29_bdf9_d673ed928210.slice/crio-700f3358d42098b844ff5844ad919c1f9d9cbc50bfa3194b8702c0f6d168ad82 WatchSource:0}: Error finding container 700f3358d42098b844ff5844ad919c1f9d9cbc50bfa3194b8702c0f6d168ad82: Status 404 returned error can't find the container with id 700f3358d42098b844ff5844ad919c1f9d9cbc50bfa3194b8702c0f6d168ad82 Feb 16 12:50:34 crc kubenswrapper[4799]: I0216 12:50:34.436487 4799 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8scmb"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.549344 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mxmd5" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.566046 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1745-account-create-update-h4zdh"] Feb 16 12:50:35 crc kubenswrapper[4799]: W0216 12:50:34.586576 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod201c84bc_cc45_471a_a86c_fe79ab2a2174.slice/crio-695ed2313dd4b29b04be313f4edb04ab5c54147c5c8f5ab11cc39e3619dc1513 WatchSource:0}: Error finding container 695ed2313dd4b29b04be313f4edb04ab5c54147c5c8f5ab11cc39e3619dc1513: Status 404 returned error can't find the container with id 695ed2313dd4b29b04be313f4edb04ab5c54147c5c8f5ab11cc39e3619dc1513 Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.628497 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-combined-ca-bundle\") pod \"ff79791d-f33a-4986-9dd4-67c6af5bf747\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.628649 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-db-sync-config-data\") pod \"ff79791d-f33a-4986-9dd4-67c6af5bf747\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.628833 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-config-data\") pod \"ff79791d-f33a-4986-9dd4-67c6af5bf747\" (UID: 
\"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.628903 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzxnj\" (UniqueName: \"kubernetes.io/projected/ff79791d-f33a-4986-9dd4-67c6af5bf747-kube-api-access-nzxnj\") pod \"ff79791d-f33a-4986-9dd4-67c6af5bf747\" (UID: \"ff79791d-f33a-4986-9dd4-67c6af5bf747\") " Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.645948 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff79791d-f33a-4986-9dd4-67c6af5bf747-kube-api-access-nzxnj" (OuterVolumeSpecName: "kube-api-access-nzxnj") pod "ff79791d-f33a-4986-9dd4-67c6af5bf747" (UID: "ff79791d-f33a-4986-9dd4-67c6af5bf747"). InnerVolumeSpecName "kube-api-access-nzxnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.658026 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ff79791d-f33a-4986-9dd4-67c6af5bf747" (UID: "ff79791d-f33a-4986-9dd4-67c6af5bf747"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.678682 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1745-account-create-update-h4zdh" event={"ID":"201c84bc-cc45-471a-a86c-fe79ab2a2174","Type":"ContainerStarted","Data":"695ed2313dd4b29b04be313f4edb04ab5c54147c5c8f5ab11cc39e3619dc1513"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.686644 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-hlxms"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.687150 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8scmb" event={"ID":"1144c46a-41c9-4032-8811-2b3c930586f9","Type":"ContainerStarted","Data":"faf68876814b249b04089e3db8bc4ac7d3dcad1f4a63bcbb0f77bc22d823bccf"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.710796 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l2t2n" event={"ID":"591accad-9c4d-4e29-bdf9-d673ed928210","Type":"ContainerStarted","Data":"700f3358d42098b844ff5844ad919c1f9d9cbc50bfa3194b8702c0f6d168ad82"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.714893 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j5sct"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.727370 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j5sct"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.731109 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.731555 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzxnj\" (UniqueName: 
\"kubernetes.io/projected/ff79791d-f33a-4986-9dd4-67c6af5bf747-kube-api-access-nzxnj\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.731511 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff79791d-f33a-4986-9dd4-67c6af5bf747" (UID: "ff79791d-f33a-4986-9dd4-67c6af5bf747"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.738437 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2a53-account-create-update-zc78k"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.744345 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxmd5" event={"ID":"ff79791d-f33a-4986-9dd4-67c6af5bf747","Type":"ContainerDied","Data":"6cf91dfd0163e7059c3c9a7cfd99a8a9ad02792d64617dedccb184cdfb298121"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.744381 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf91dfd0163e7059c3c9a7cfd99a8a9ad02792d64617dedccb184cdfb298121" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.744617 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mxmd5" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.766360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvv2w" event={"ID":"043950ea-86bb-464a-b829-8816123fe1cd","Type":"ContainerStarted","Data":"c7ac133f56cdafcd3d9cd734cffc7c6b5986212bfe920dfbea4cc4a49954355f"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.766399 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvv2w" event={"ID":"043950ea-86bb-464a-b829-8816123fe1cd","Type":"ContainerStarted","Data":"efa75c66854df7696259ec2ab966ca7cd849339326e17e2a4df79e726f8530f6"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.771874 4799 generic.go:334] "Generic (PLEG): container finished" podID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerID="c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c" exitCode=0 Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.771920 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerDied","Data":"c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.789334 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-config-data" (OuterVolumeSpecName: "config-data") pod "ff79791d-f33a-4986-9dd4-67c6af5bf747" (UID: "ff79791d-f33a-4986-9dd4-67c6af5bf747"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.844047 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.844075 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff79791d-f33a-4986-9dd4-67c6af5bf747-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:34.881201 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-dvv2w" podStartSLOduration=1.881149317 podStartE2EDuration="1.881149317s" podCreationTimestamp="2026-02-16 12:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:34.804460033 +0000 UTC m=+1140.397475367" watchObservedRunningTime="2026-02-16 12:50:34.881149317 +0000 UTC m=+1140.474164651" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.194801 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ddf35d-e8d4-4da5-b526-49abe0912403" path="/var/lib/kubelet/pods/58ddf35d-e8d4-4da5-b526-49abe0912403/volumes" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.195996 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0aaf6e7-4a12-4815-b655-4c42df40dec9" path="/var/lib/kubelet/pods/b0aaf6e7-4a12-4815-b655-4c42df40dec9/volumes" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.492883 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4hjhh"] Feb 16 12:50:35 crc kubenswrapper[4799]: E0216 12:50:35.493303 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff79791d-f33a-4986-9dd4-67c6af5bf747" 
containerName="glance-db-sync" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.493315 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff79791d-f33a-4986-9dd4-67c6af5bf747" containerName="glance-db-sync" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.493547 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff79791d-f33a-4986-9dd4-67c6af5bf747" containerName="glance-db-sync" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.494101 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.513682 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4hjhh"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.582990 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqw9w\" (UniqueName: \"kubernetes.io/projected/f90f7da9-52e1-4369-a123-145ec31299db-kube-api-access-rqw9w\") pod \"neutron-db-create-4hjhh\" (UID: \"f90f7da9-52e1-4369-a123-145ec31299db\") " pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.583036 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90f7da9-52e1-4369-a123-145ec31299db-operator-scripts\") pod \"neutron-db-create-4hjhh\" (UID: \"f90f7da9-52e1-4369-a123-145ec31299db\") " pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.669799 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b5c968d55-8gh88"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.670645 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" podUID="92c214b0-98ba-493c-a0eb-e465a172f9f7" containerName="dnsmasq-dns" 
containerID="cri-o://ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd" gracePeriod=10 Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.684647 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90f7da9-52e1-4369-a123-145ec31299db-operator-scripts\") pod \"neutron-db-create-4hjhh\" (UID: \"f90f7da9-52e1-4369-a123-145ec31299db\") " pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.685451 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90f7da9-52e1-4369-a123-145ec31299db-operator-scripts\") pod \"neutron-db-create-4hjhh\" (UID: \"f90f7da9-52e1-4369-a123-145ec31299db\") " pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.685762 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqw9w\" (UniqueName: \"kubernetes.io/projected/f90f7da9-52e1-4369-a123-145ec31299db-kube-api-access-rqw9w\") pod \"neutron-db-create-4hjhh\" (UID: \"f90f7da9-52e1-4369-a123-145ec31299db\") " pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.766395 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqw9w\" (UniqueName: \"kubernetes.io/projected/f90f7da9-52e1-4369-a123-145ec31299db-kube-api-access-rqw9w\") pod \"neutron-db-create-4hjhh\" (UID: \"f90f7da9-52e1-4369-a123-145ec31299db\") " pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.767207 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-87bb67d67-4q44z"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.768783 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.792083 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-2x28q"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.793229 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.804420 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-hlxms" event={"ID":"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec","Type":"ContainerStarted","Data":"d987869da5b213cb51cfa1c20220b0fbef427682fe91ce51c0565b6e407d6750"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.810751 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l2t2n" event={"ID":"591accad-9c4d-4e29-bdf9-d673ed928210","Type":"ContainerStarted","Data":"b4fba0d86739ea19470d37843f86d53dee7132cf6e59e5e24e668b8e835c12b6"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.820962 4799 generic.go:334] "Generic (PLEG): container finished" podID="043950ea-86bb-464a-b829-8816123fe1cd" containerID="c7ac133f56cdafcd3d9cd734cffc7c6b5986212bfe920dfbea4cc4a49954355f" exitCode=0 Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.821045 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvv2w" event={"ID":"043950ea-86bb-464a-b829-8816123fe1cd","Type":"ContainerDied","Data":"c7ac133f56cdafcd3d9cd734cffc7c6b5986212bfe920dfbea4cc4a49954355f"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.830590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerStarted","Data":"950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.840760 4799 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-rwt49" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.840941 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.848647 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-2x28q"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.848658 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.853802 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2a53-account-create-update-zc78k" event={"ID":"430af87d-ae1f-4b73-93e7-d8aa93192ae5","Type":"ContainerStarted","Data":"60424934eccdb7ea226c0c4e856a1cd68af7617cbb4e2361bbf6f10f0c951a6e"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.853849 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2a53-account-create-update-zc78k" event={"ID":"430af87d-ae1f-4b73-93e7-d8aa93192ae5","Type":"ContainerStarted","Data":"8d68fcc3001b62147efb68f3ee997ea74eb2c60ca548f209884f9ae21c795fb3"} Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890206 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890455 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: 
\"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890511 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-db-sync-config-data\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890541 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw4km\" (UniqueName: \"kubernetes.io/projected/c52c4130-5b91-4e36-ad36-8333675ee0a4-kube-api-access-cw4km\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890561 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-config\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890578 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-combined-ca-bundle\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890623 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-nb\") pod 
\"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890646 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvhfl\" (UniqueName: \"kubernetes.io/projected/762cb41d-d3c9-4b97-bdbf-7062f65fba96-kube-api-access-pvhfl\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890671 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-config-data\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.890704 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-svc\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.903453 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87bb67d67-4q44z"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.972730 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4e39-account-create-update-kwmxq"] Feb 16 12:50:35 crc kubenswrapper[4799]: I0216 12:50:35.973866 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.019995 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-config-data\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020071 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-svc\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020113 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020170 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020219 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-db-sync-config-data\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 
12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020244 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw4km\" (UniqueName: \"kubernetes.io/projected/c52c4130-5b91-4e36-ad36-8333675ee0a4-kube-api-access-cw4km\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020267 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-config\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020284 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-combined-ca-bundle\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020339 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-nb\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.020374 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvhfl\" (UniqueName: \"kubernetes.io/projected/762cb41d-d3c9-4b97-bdbf-7062f65fba96-kube-api-access-pvhfl\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.022238 4799 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.033248 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-svc\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.034260 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-config-data\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.034792 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-config\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.035104 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-nb\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.040721 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 
12:50:36.042144 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.057003 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-db-sync-config-data\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.058442 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4e39-account-create-update-kwmxq"] Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.082309 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-combined-ca-bundle\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.082862 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw4km\" (UniqueName: \"kubernetes.io/projected/c52c4130-5b91-4e36-ad36-8333675ee0a4-kube-api-access-cw4km\") pod \"dnsmasq-dns-87bb67d67-4q44z\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.118841 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2a53-account-create-update-zc78k" podStartSLOduration=3.118821927 podStartE2EDuration="3.118821927s" podCreationTimestamp="2026-02-16 12:50:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:36.117778887 +0000 UTC m=+1141.710794221" watchObservedRunningTime="2026-02-16 12:50:36.118821927 +0000 UTC m=+1141.711837261" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.123610 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkzg\" (UniqueName: \"kubernetes.io/projected/f783521e-3e89-4fc3-bdb6-08bc1ee82739-kube-api-access-wfkzg\") pod \"neutron-4e39-account-create-update-kwmxq\" (UID: \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\") " pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.123776 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f783521e-3e89-4fc3-bdb6-08bc1ee82739-operator-scripts\") pod \"neutron-4e39-account-create-update-kwmxq\" (UID: \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\") " pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.133181 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvhfl\" (UniqueName: \"kubernetes.io/projected/762cb41d-d3c9-4b97-bdbf-7062f65fba96-kube-api-access-pvhfl\") pod \"watcher-db-sync-2x28q\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.159266 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.182787 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.231240 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f783521e-3e89-4fc3-bdb6-08bc1ee82739-operator-scripts\") pod \"neutron-4e39-account-create-update-kwmxq\" (UID: \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\") " pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.231328 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfkzg\" (UniqueName: \"kubernetes.io/projected/f783521e-3e89-4fc3-bdb6-08bc1ee82739-kube-api-access-wfkzg\") pod \"neutron-4e39-account-create-update-kwmxq\" (UID: \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\") " pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.232639 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f783521e-3e89-4fc3-bdb6-08bc1ee82739-operator-scripts\") pod \"neutron-4e39-account-create-update-kwmxq\" (UID: \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\") " pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.258095 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfkzg\" (UniqueName: \"kubernetes.io/projected/f783521e-3e89-4fc3-bdb6-08bc1ee82739-kube-api-access-wfkzg\") pod \"neutron-4e39-account-create-update-kwmxq\" (UID: \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\") " pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.423124 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.764761 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.811231 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-l2t2n" podStartSLOduration=3.811208596 podStartE2EDuration="3.811208596s" podCreationTimestamp="2026-02-16 12:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:36.261164985 +0000 UTC m=+1141.854180329" watchObservedRunningTime="2026-02-16 12:50:36.811208596 +0000 UTC m=+1142.404223930" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.836241 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4hjhh"] Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.885513 4799 generic.go:334] "Generic (PLEG): container finished" podID="430af87d-ae1f-4b73-93e7-d8aa93192ae5" containerID="60424934eccdb7ea226c0c4e856a1cd68af7617cbb4e2361bbf6f10f0c951a6e" exitCode=0 Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.885604 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2a53-account-create-update-zc78k" event={"ID":"430af87d-ae1f-4b73-93e7-d8aa93192ae5","Type":"ContainerDied","Data":"60424934eccdb7ea226c0c4e856a1cd68af7617cbb4e2361bbf6f10f0c951a6e"} Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.893260 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1745-account-create-update-h4zdh" event={"ID":"201c84bc-cc45-471a-a86c-fe79ab2a2174","Type":"ContainerStarted","Data":"3e9c09e275a75b9cafa94a282b53925801b0763f43ffbbfe5048316f689fe85b"} Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.898061 4799 generic.go:334] "Generic 
(PLEG): container finished" podID="b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" containerID="5e12f56f901c2378eea66964f27f2075afc1c1fd980d86e35efff241a2c443b5" exitCode=0 Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.898249 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-hlxms" event={"ID":"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec","Type":"ContainerDied","Data":"5e12f56f901c2378eea66964f27f2075afc1c1fd980d86e35efff241a2c443b5"} Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.906114 4799 generic.go:334] "Generic (PLEG): container finished" podID="92c214b0-98ba-493c-a0eb-e465a172f9f7" containerID="ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd" exitCode=0 Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.906194 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" event={"ID":"92c214b0-98ba-493c-a0eb-e465a172f9f7","Type":"ContainerDied","Data":"ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd"} Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.906224 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" event={"ID":"92c214b0-98ba-493c-a0eb-e465a172f9f7","Type":"ContainerDied","Data":"336d3b54bef9d013797e34f2703596b600cac47e1169fd3f78337b496b7d957e"} Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.906240 4799 scope.go:117] "RemoveContainer" containerID="ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.906362 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b5c968d55-8gh88" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.909509 4799 generic.go:334] "Generic (PLEG): container finished" podID="591accad-9c4d-4e29-bdf9-d673ed928210" containerID="b4fba0d86739ea19470d37843f86d53dee7132cf6e59e5e24e668b8e835c12b6" exitCode=0 Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.909551 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l2t2n" event={"ID":"591accad-9c4d-4e29-bdf9-d673ed928210","Type":"ContainerDied","Data":"b4fba0d86739ea19470d37843f86d53dee7132cf6e59e5e24e668b8e835c12b6"} Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.911919 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4hjhh" event={"ID":"f90f7da9-52e1-4369-a123-145ec31299db","Type":"ContainerStarted","Data":"7b51f1cf029e51422e694bfe6ee907086cba1b75f9941436c4b29f6df4cdeb32"} Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.941496 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-1745-account-create-update-h4zdh" podStartSLOduration=3.941472244 podStartE2EDuration="3.941472244s" podCreationTimestamp="2026-02-16 12:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:36.928420736 +0000 UTC m=+1142.521436070" watchObservedRunningTime="2026-02-16 12:50:36.941472244 +0000 UTC m=+1142.534487578" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.951488 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-config\") pod \"92c214b0-98ba-493c-a0eb-e465a172f9f7\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.951547 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-8f6h2\" (UniqueName: \"kubernetes.io/projected/92c214b0-98ba-493c-a0eb-e465a172f9f7-kube-api-access-8f6h2\") pod \"92c214b0-98ba-493c-a0eb-e465a172f9f7\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.951572 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-nb\") pod \"92c214b0-98ba-493c-a0eb-e465a172f9f7\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.951631 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-sb\") pod \"92c214b0-98ba-493c-a0eb-e465a172f9f7\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.951721 4799 scope.go:117] "RemoveContainer" containerID="f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f" Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.951763 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-swift-storage-0\") pod \"92c214b0-98ba-493c-a0eb-e465a172f9f7\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " Feb 16 12:50:36 crc kubenswrapper[4799]: I0216 12:50:36.951785 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-svc\") pod \"92c214b0-98ba-493c-a0eb-e465a172f9f7\" (UID: \"92c214b0-98ba-493c-a0eb-e465a172f9f7\") " Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.012456 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/92c214b0-98ba-493c-a0eb-e465a172f9f7-kube-api-access-8f6h2" (OuterVolumeSpecName: "kube-api-access-8f6h2") pod "92c214b0-98ba-493c-a0eb-e465a172f9f7" (UID: "92c214b0-98ba-493c-a0eb-e465a172f9f7"). InnerVolumeSpecName "kube-api-access-8f6h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.030306 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4e39-account-create-update-kwmxq"] Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.068960 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f6h2\" (UniqueName: \"kubernetes.io/projected/92c214b0-98ba-493c-a0eb-e465a172f9f7-kube-api-access-8f6h2\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.083793 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87bb67d67-4q44z"] Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.115593 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-config" (OuterVolumeSpecName: "config") pod "92c214b0-98ba-493c-a0eb-e465a172f9f7" (UID: "92c214b0-98ba-493c-a0eb-e465a172f9f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.129318 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-2x28q"] Feb 16 12:50:37 crc kubenswrapper[4799]: W0216 12:50:37.143299 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc52c4130_5b91_4e36_ad36_8333675ee0a4.slice/crio-169b9793d6a9c9cfa12e1d63816e246bc8b0611f479151090593dbaac9f089a7 WatchSource:0}: Error finding container 169b9793d6a9c9cfa12e1d63816e246bc8b0611f479151090593dbaac9f089a7: Status 404 returned error can't find the container with id 169b9793d6a9c9cfa12e1d63816e246bc8b0611f479151090593dbaac9f089a7 Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.186687 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.231753 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "92c214b0-98ba-493c-a0eb-e465a172f9f7" (UID: "92c214b0-98ba-493c-a0eb-e465a172f9f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.234066 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92c214b0-98ba-493c-a0eb-e465a172f9f7" (UID: "92c214b0-98ba-493c-a0eb-e465a172f9f7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.255296 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92c214b0-98ba-493c-a0eb-e465a172f9f7" (UID: "92c214b0-98ba-493c-a0eb-e465a172f9f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.255980 4799 scope.go:117] "RemoveContainer" containerID="ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd" Feb 16 12:50:37 crc kubenswrapper[4799]: E0216 12:50:37.256557 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd\": container with ID starting with ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd not found: ID does not exist" containerID="ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.256595 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd"} err="failed to get container status \"ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd\": rpc error: code = NotFound desc = could not find container \"ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd\": container with ID starting with ab8eced5b3c8056bf563c2425229a60381154c40ad2206c46c0fbd7409989ddd not found: ID does not exist" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.256622 4799 scope.go:117] "RemoveContainer" containerID="f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f" Feb 16 12:50:37 crc kubenswrapper[4799]: E0216 12:50:37.256964 4799 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f\": container with ID starting with f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f not found: ID does not exist" containerID="f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.256987 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f"} err="failed to get container status \"f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f\": rpc error: code = NotFound desc = could not find container \"f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f\": container with ID starting with f9ec8a666cba27a8a20db59b04837934c46d53798ebeb3a51cf9e8daf4da842f not found: ID does not exist" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.291879 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.291922 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.291935 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.365253 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.494865 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043950ea-86bb-464a-b829-8816123fe1cd-operator-scripts\") pod \"043950ea-86bb-464a-b829-8816123fe1cd\" (UID: \"043950ea-86bb-464a-b829-8816123fe1cd\") " Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.494933 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xhp2\" (UniqueName: \"kubernetes.io/projected/043950ea-86bb-464a-b829-8816123fe1cd-kube-api-access-4xhp2\") pod \"043950ea-86bb-464a-b829-8816123fe1cd\" (UID: \"043950ea-86bb-464a-b829-8816123fe1cd\") " Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.495385 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/043950ea-86bb-464a-b829-8816123fe1cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "043950ea-86bb-464a-b829-8816123fe1cd" (UID: "043950ea-86bb-464a-b829-8816123fe1cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.495694 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043950ea-86bb-464a-b829-8816123fe1cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.499288 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043950ea-86bb-464a-b829-8816123fe1cd-kube-api-access-4xhp2" (OuterVolumeSpecName: "kube-api-access-4xhp2") pod "043950ea-86bb-464a-b829-8816123fe1cd" (UID: "043950ea-86bb-464a-b829-8816123fe1cd"). InnerVolumeSpecName "kube-api-access-4xhp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.515409 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92c214b0-98ba-493c-a0eb-e465a172f9f7" (UID: "92c214b0-98ba-493c-a0eb-e465a172f9f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.597345 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xhp2\" (UniqueName: \"kubernetes.io/projected/043950ea-86bb-464a-b829-8816123fe1cd-kube-api-access-4xhp2\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.597377 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92c214b0-98ba-493c-a0eb-e465a172f9f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.857504 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b5c968d55-8gh88"] Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.870788 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b5c968d55-8gh88"] Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.923410 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4e39-account-create-update-kwmxq" event={"ID":"f783521e-3e89-4fc3-bdb6-08bc1ee82739","Type":"ContainerStarted","Data":"83bd9a072dc28001f442eaaa9890fb65092b985458e5ad1a553a1ad026e23036"} Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.923459 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4e39-account-create-update-kwmxq" 
event={"ID":"f783521e-3e89-4fc3-bdb6-08bc1ee82739","Type":"ContainerStarted","Data":"11efc9e403014caeacea9522a69835031f62d00e9e7b59961aaa74a582d71982"} Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.925054 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-2x28q" event={"ID":"762cb41d-d3c9-4b97-bdbf-7062f65fba96","Type":"ContainerStarted","Data":"ae861d3681d23f46d0fffa3c95f2c25287530965f3bbe5290b9a5fab29c563b3"} Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.926665 4799 generic.go:334] "Generic (PLEG): container finished" podID="201c84bc-cc45-471a-a86c-fe79ab2a2174" containerID="3e9c09e275a75b9cafa94a282b53925801b0763f43ffbbfe5048316f689fe85b" exitCode=0 Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.926732 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1745-account-create-update-h4zdh" event={"ID":"201c84bc-cc45-471a-a86c-fe79ab2a2174","Type":"ContainerDied","Data":"3e9c09e275a75b9cafa94a282b53925801b0763f43ffbbfe5048316f689fe85b"} Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.931138 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvv2w" event={"ID":"043950ea-86bb-464a-b829-8816123fe1cd","Type":"ContainerDied","Data":"efa75c66854df7696259ec2ab966ca7cd849339326e17e2a4df79e726f8530f6"} Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.931194 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa75c66854df7696259ec2ab966ca7cd849339326e17e2a4df79e726f8530f6" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.931269 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dvv2w" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.952368 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4hjhh" event={"ID":"f90f7da9-52e1-4369-a123-145ec31299db","Type":"ContainerStarted","Data":"74e068cc325d38f01767ba58b734ee12d0b5d551a43aa3c5c1bb06d2568968a4"} Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.954489 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-4e39-account-create-update-kwmxq" podStartSLOduration=2.95445972 podStartE2EDuration="2.95445972s" podCreationTimestamp="2026-02-16 12:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:37.940930548 +0000 UTC m=+1143.533945882" watchObservedRunningTime="2026-02-16 12:50:37.95445972 +0000 UTC m=+1143.547475054" Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.961183 4799 generic.go:334] "Generic (PLEG): container finished" podID="c52c4130-5b91-4e36-ad36-8333675ee0a4" containerID="1ece960357bedd0051760d0fda5e677e707f2552a35c5e1ad02cb2d5bda1517b" exitCode=0 Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.962524 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" event={"ID":"c52c4130-5b91-4e36-ad36-8333675ee0a4","Type":"ContainerDied","Data":"1ece960357bedd0051760d0fda5e677e707f2552a35c5e1ad02cb2d5bda1517b"} Feb 16 12:50:37 crc kubenswrapper[4799]: I0216 12:50:37.962717 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" event={"ID":"c52c4130-5b91-4e36-ad36-8333675ee0a4","Type":"ContainerStarted","Data":"169b9793d6a9c9cfa12e1d63816e246bc8b0611f479151090593dbaac9f089a7"} Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.057918 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-db-create-4hjhh" podStartSLOduration=3.05789161 podStartE2EDuration="3.05789161s" podCreationTimestamp="2026-02-16 12:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:38.00756116 +0000 UTC m=+1143.600576504" watchObservedRunningTime="2026-02-16 12:50:38.05789161 +0000 UTC m=+1143.650906944" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.802186 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.821797 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.840003 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430af87d-ae1f-4b73-93e7-d8aa93192ae5-operator-scripts\") pod \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\" (UID: \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.840054 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxblx\" (UniqueName: \"kubernetes.io/projected/591accad-9c4d-4e29-bdf9-d673ed928210-kube-api-access-mxblx\") pod \"591accad-9c4d-4e29-bdf9-d673ed928210\" (UID: \"591accad-9c4d-4e29-bdf9-d673ed928210\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.840069 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.840175 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc4gl\" (UniqueName: \"kubernetes.io/projected/430af87d-ae1f-4b73-93e7-d8aa93192ae5-kube-api-access-jc4gl\") pod \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\" (UID: \"430af87d-ae1f-4b73-93e7-d8aa93192ae5\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.840312 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591accad-9c4d-4e29-bdf9-d673ed928210-operator-scripts\") pod \"591accad-9c4d-4e29-bdf9-d673ed928210\" (UID: \"591accad-9c4d-4e29-bdf9-d673ed928210\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.840648 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430af87d-ae1f-4b73-93e7-d8aa93192ae5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "430af87d-ae1f-4b73-93e7-d8aa93192ae5" (UID: "430af87d-ae1f-4b73-93e7-d8aa93192ae5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.841069 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/591accad-9c4d-4e29-bdf9-d673ed928210-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "591accad-9c4d-4e29-bdf9-d673ed928210" (UID: "591accad-9c4d-4e29-bdf9-d673ed928210"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.841190 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/430af87d-ae1f-4b73-93e7-d8aa93192ae5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.853452 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430af87d-ae1f-4b73-93e7-d8aa93192ae5-kube-api-access-jc4gl" (OuterVolumeSpecName: "kube-api-access-jc4gl") pod "430af87d-ae1f-4b73-93e7-d8aa93192ae5" (UID: "430af87d-ae1f-4b73-93e7-d8aa93192ae5"). InnerVolumeSpecName "kube-api-access-jc4gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.880511 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591accad-9c4d-4e29-bdf9-d673ed928210-kube-api-access-mxblx" (OuterVolumeSpecName: "kube-api-access-mxblx") pod "591accad-9c4d-4e29-bdf9-d673ed928210" (UID: "591accad-9c4d-4e29-bdf9-d673ed928210"). InnerVolumeSpecName "kube-api-access-mxblx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.942437 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-scripts\") pod \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.942595 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run-ovn\") pod \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.942716 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" (UID: "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.942768 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-log-ovn\") pod \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.942881 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" (UID: "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.943477 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9ccf\" (UniqueName: \"kubernetes.io/projected/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-kube-api-access-d9ccf\") pod \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.943541 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-additional-scripts\") pod \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.943560 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run\") pod \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\" (UID: \"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec\") " Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.943883 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-scripts" (OuterVolumeSpecName: "scripts") pod "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" (UID: "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.943947 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run" (OuterVolumeSpecName: "var-run") pod "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" (UID: "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.944752 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" (UID: "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.947937 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-kube-api-access-d9ccf" (OuterVolumeSpecName: "kube-api-access-d9ccf") pod "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" (UID: "b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec"). InnerVolumeSpecName "kube-api-access-d9ccf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.948772 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.948805 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc4gl\" (UniqueName: \"kubernetes.io/projected/430af87d-ae1f-4b73-93e7-d8aa93192ae5-kube-api-access-jc4gl\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.948821 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/591accad-9c4d-4e29-bdf9-d673ed928210-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.948835 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.948847 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.948858 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxblx\" (UniqueName: \"kubernetes.io/projected/591accad-9c4d-4e29-bdf9-d673ed928210-kube-api-access-mxblx\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.948869 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.983307 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2a53-account-create-update-zc78k" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.983299 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2a53-account-create-update-zc78k" event={"ID":"430af87d-ae1f-4b73-93e7-d8aa93192ae5","Type":"ContainerDied","Data":"8d68fcc3001b62147efb68f3ee997ea74eb2c60ca548f209884f9ae21c795fb3"} Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.983428 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d68fcc3001b62147efb68f3ee997ea74eb2c60ca548f209884f9ae21c795fb3" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.984855 4799 generic.go:334] "Generic (PLEG): container finished" podID="f783521e-3e89-4fc3-bdb6-08bc1ee82739" containerID="83bd9a072dc28001f442eaaa9890fb65092b985458e5ad1a553a1ad026e23036" exitCode=0 Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.984905 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4e39-account-create-update-kwmxq" event={"ID":"f783521e-3e89-4fc3-bdb6-08bc1ee82739","Type":"ContainerDied","Data":"83bd9a072dc28001f442eaaa9890fb65092b985458e5ad1a553a1ad026e23036"} Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.986312 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-hlxms" event={"ID":"b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec","Type":"ContainerDied","Data":"d987869da5b213cb51cfa1c20220b0fbef427682fe91ce51c0565b6e407d6750"} Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.986389 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d987869da5b213cb51cfa1c20220b0fbef427682fe91ce51c0565b6e407d6750" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.986364 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-hlxms" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.989063 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-l2t2n" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.989105 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-l2t2n" event={"ID":"591accad-9c4d-4e29-bdf9-d673ed928210","Type":"ContainerDied","Data":"700f3358d42098b844ff5844ad919c1f9d9cbc50bfa3194b8702c0f6d168ad82"} Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.989187 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="700f3358d42098b844ff5844ad919c1f9d9cbc50bfa3194b8702c0f6d168ad82" Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.990721 4799 generic.go:334] "Generic (PLEG): container finished" podID="f90f7da9-52e1-4369-a123-145ec31299db" containerID="74e068cc325d38f01767ba58b734ee12d0b5d551a43aa3c5c1bb06d2568968a4" exitCode=0 Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.990827 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4hjhh" event={"ID":"f90f7da9-52e1-4369-a123-145ec31299db","Type":"ContainerDied","Data":"74e068cc325d38f01767ba58b734ee12d0b5d551a43aa3c5c1bb06d2568968a4"} Feb 16 12:50:38 crc kubenswrapper[4799]: I0216 12:50:38.995653 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerStarted","Data":"c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47"} Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.011982 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" event={"ID":"c52c4130-5b91-4e36-ad36-8333675ee0a4","Type":"ContainerStarted","Data":"4efd09e53f7a8965112f9e61bdb188cf985320500ec87edfba54ba70c12e0155"} Feb 16 12:50:39 crc 
kubenswrapper[4799]: I0216 12:50:39.012849 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.050403 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" podStartSLOduration=4.050385153 podStartE2EDuration="4.050385153s" podCreationTimestamp="2026-02-16 12:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:39.050373962 +0000 UTC m=+1144.643389296" watchObservedRunningTime="2026-02-16 12:50:39.050385153 +0000 UTC m=+1144.643400487" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.051902 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9ccf\" (UniqueName: \"kubernetes.io/projected/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-kube-api-access-d9ccf\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.051949 4799 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.162968 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c214b0-98ba-493c-a0eb-e465a172f9f7" path="/var/lib/kubelet/pods/92c214b0-98ba-493c-a0eb-e465a172f9f7/volumes" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.326802 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.363615 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201c84bc-cc45-471a-a86c-fe79ab2a2174-operator-scripts\") pod \"201c84bc-cc45-471a-a86c-fe79ab2a2174\" (UID: \"201c84bc-cc45-471a-a86c-fe79ab2a2174\") " Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.363704 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhdw4\" (UniqueName: \"kubernetes.io/projected/201c84bc-cc45-471a-a86c-fe79ab2a2174-kube-api-access-xhdw4\") pod \"201c84bc-cc45-471a-a86c-fe79ab2a2174\" (UID: \"201c84bc-cc45-471a-a86c-fe79ab2a2174\") " Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.364495 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/201c84bc-cc45-471a-a86c-fe79ab2a2174-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "201c84bc-cc45-471a-a86c-fe79ab2a2174" (UID: "201c84bc-cc45-471a-a86c-fe79ab2a2174"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.365208 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/201c84bc-cc45-471a-a86c-fe79ab2a2174-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.369493 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201c84bc-cc45-471a-a86c-fe79ab2a2174-kube-api-access-xhdw4" (OuterVolumeSpecName: "kube-api-access-xhdw4") pod "201c84bc-cc45-471a-a86c-fe79ab2a2174" (UID: "201c84bc-cc45-471a-a86c-fe79ab2a2174"). InnerVolumeSpecName "kube-api-access-xhdw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.467248 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhdw4\" (UniqueName: \"kubernetes.io/projected/201c84bc-cc45-471a-a86c-fe79ab2a2174-kube-api-access-xhdw4\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.747933 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zq5qf"] Feb 16 12:50:39 crc kubenswrapper[4799]: E0216 12:50:39.748443 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430af87d-ae1f-4b73-93e7-d8aa93192ae5" containerName="mariadb-account-create-update" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748468 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="430af87d-ae1f-4b73-93e7-d8aa93192ae5" containerName="mariadb-account-create-update" Feb 16 12:50:39 crc kubenswrapper[4799]: E0216 12:50:39.748490 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c214b0-98ba-493c-a0eb-e465a172f9f7" containerName="dnsmasq-dns" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748500 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c214b0-98ba-493c-a0eb-e465a172f9f7" containerName="dnsmasq-dns" Feb 16 12:50:39 crc kubenswrapper[4799]: E0216 12:50:39.748510 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043950ea-86bb-464a-b829-8816123fe1cd" containerName="mariadb-database-create" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748518 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="043950ea-86bb-464a-b829-8816123fe1cd" containerName="mariadb-database-create" Feb 16 12:50:39 crc kubenswrapper[4799]: E0216 12:50:39.748537 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591accad-9c4d-4e29-bdf9-d673ed928210" containerName="mariadb-database-create" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 
12:50:39.748549 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="591accad-9c4d-4e29-bdf9-d673ed928210" containerName="mariadb-database-create" Feb 16 12:50:39 crc kubenswrapper[4799]: E0216 12:50:39.748569 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201c84bc-cc45-471a-a86c-fe79ab2a2174" containerName="mariadb-account-create-update" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748578 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="201c84bc-cc45-471a-a86c-fe79ab2a2174" containerName="mariadb-account-create-update" Feb 16 12:50:39 crc kubenswrapper[4799]: E0216 12:50:39.748598 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c214b0-98ba-493c-a0eb-e465a172f9f7" containerName="init" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748605 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c214b0-98ba-493c-a0eb-e465a172f9f7" containerName="init" Feb 16 12:50:39 crc kubenswrapper[4799]: E0216 12:50:39.748614 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" containerName="ovn-config" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748622 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" containerName="ovn-config" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748821 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="043950ea-86bb-464a-b829-8816123fe1cd" containerName="mariadb-database-create" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748867 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" containerName="ovn-config" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748882 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="591accad-9c4d-4e29-bdf9-d673ed928210" containerName="mariadb-database-create" Feb 16 12:50:39 crc 
kubenswrapper[4799]: I0216 12:50:39.748904 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="430af87d-ae1f-4b73-93e7-d8aa93192ae5" containerName="mariadb-account-create-update" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748924 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c214b0-98ba-493c-a0eb-e465a172f9f7" containerName="dnsmasq-dns" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.748938 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="201c84bc-cc45-471a-a86c-fe79ab2a2174" containerName="mariadb-account-create-update" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.749678 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.753298 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.770144 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zq5qf"] Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.773623 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz6db\" (UniqueName: \"kubernetes.io/projected/0e4b5dce-a5b1-4372-8138-03e7d62b9772-kube-api-access-jz6db\") pod \"root-account-create-update-zq5qf\" (UID: \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\") " pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.773709 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4b5dce-a5b1-4372-8138-03e7d62b9772-operator-scripts\") pod \"root-account-create-update-zq5qf\" (UID: \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\") " 
pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.878587 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4b5dce-a5b1-4372-8138-03e7d62b9772-operator-scripts\") pod \"root-account-create-update-zq5qf\" (UID: \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\") " pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.878770 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz6db\" (UniqueName: \"kubernetes.io/projected/0e4b5dce-a5b1-4372-8138-03e7d62b9772-kube-api-access-jz6db\") pod \"root-account-create-update-zq5qf\" (UID: \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\") " pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.883959 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4b5dce-a5b1-4372-8138-03e7d62b9772-operator-scripts\") pod \"root-account-create-update-zq5qf\" (UID: \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\") " pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.902361 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz6db\" (UniqueName: \"kubernetes.io/projected/0e4b5dce-a5b1-4372-8138-03e7d62b9772-kube-api-access-jz6db\") pod \"root-account-create-update-zq5qf\" (UID: \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\") " pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.944689 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wr6ph-config-hlxms"] Feb 16 12:50:39 crc kubenswrapper[4799]: I0216 12:50:39.954586 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wr6ph-config-hlxms"] Feb 
16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.028209 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1745-account-create-update-h4zdh" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.028202 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1745-account-create-update-h4zdh" event={"ID":"201c84bc-cc45-471a-a86c-fe79ab2a2174","Type":"ContainerDied","Data":"695ed2313dd4b29b04be313f4edb04ab5c54147c5c8f5ab11cc39e3619dc1513"} Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.028348 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="695ed2313dd4b29b04be313f4edb04ab5c54147c5c8f5ab11cc39e3619dc1513" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.031689 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerStarted","Data":"345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09"} Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.071423 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.071399021 podStartE2EDuration="16.071399021s" podCreationTimestamp="2026-02-16 12:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:50:40.066098207 +0000 UTC m=+1145.659113541" watchObservedRunningTime="2026-02-16 12:50:40.071399021 +0000 UTC m=+1145.664414355" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.072613 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.107608 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wr6ph-config-578xg"] Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.110321 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.113173 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.137997 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-578xg"] Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.185425 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkhr\" (UniqueName: \"kubernetes.io/projected/bb552808-1de0-4b51-905b-ff12fedb0f96-kube-api-access-fgkhr\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.185476 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.185563 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run-ovn\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " 
pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.185603 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-log-ovn\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.185712 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-additional-scripts\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.185752 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-scripts\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.288820 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgkhr\" (UniqueName: \"kubernetes.io/projected/bb552808-1de0-4b51-905b-ff12fedb0f96-kube-api-access-fgkhr\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.288858 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: 
\"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.288894 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run-ovn\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.288916 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-log-ovn\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.288947 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-additional-scripts\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.288974 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-scripts\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.290791 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-scripts\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " 
pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.291286 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.291330 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run-ovn\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.291364 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-log-ovn\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.291754 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-additional-scripts\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.314471 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgkhr\" (UniqueName: \"kubernetes.io/projected/bb552808-1de0-4b51-905b-ff12fedb0f96-kube-api-access-fgkhr\") pod \"ovn-controller-wr6ph-config-578xg\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " 
pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:40 crc kubenswrapper[4799]: I0216 12:50:40.470012 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:41 crc kubenswrapper[4799]: I0216 12:50:41.167566 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec" path="/var/lib/kubelet/pods/b8c8bf0b-dfce-4ca0-b466-ae6429e3f7ec/volumes" Feb 16 12:50:42 crc kubenswrapper[4799]: I0216 12:50:42.887914 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:42 crc kubenswrapper[4799]: I0216 12:50:42.905321 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.045364 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfkzg\" (UniqueName: \"kubernetes.io/projected/f783521e-3e89-4fc3-bdb6-08bc1ee82739-kube-api-access-wfkzg\") pod \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\" (UID: \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\") " Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.045484 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f783521e-3e89-4fc3-bdb6-08bc1ee82739-operator-scripts\") pod \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\" (UID: \"f783521e-3e89-4fc3-bdb6-08bc1ee82739\") " Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.046660 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f783521e-3e89-4fc3-bdb6-08bc1ee82739-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f783521e-3e89-4fc3-bdb6-08bc1ee82739" (UID: "f783521e-3e89-4fc3-bdb6-08bc1ee82739"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.066431 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f783521e-3e89-4fc3-bdb6-08bc1ee82739-kube-api-access-wfkzg" (OuterVolumeSpecName: "kube-api-access-wfkzg") pod "f783521e-3e89-4fc3-bdb6-08bc1ee82739" (UID: "f783521e-3e89-4fc3-bdb6-08bc1ee82739"). InnerVolumeSpecName "kube-api-access-wfkzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.080797 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4e39-account-create-update-kwmxq" event={"ID":"f783521e-3e89-4fc3-bdb6-08bc1ee82739","Type":"ContainerDied","Data":"11efc9e403014caeacea9522a69835031f62d00e9e7b59961aaa74a582d71982"} Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.080834 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11efc9e403014caeacea9522a69835031f62d00e9e7b59961aaa74a582d71982" Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.080871 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4e39-account-create-update-kwmxq" Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.162033 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfkzg\" (UniqueName: \"kubernetes.io/projected/f783521e-3e89-4fc3-bdb6-08bc1ee82739-kube-api-access-wfkzg\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:43 crc kubenswrapper[4799]: I0216 12:50:43.162357 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f783521e-3e89-4fc3-bdb6-08bc1ee82739-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:44 crc kubenswrapper[4799]: I0216 12:50:44.862484 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:46 crc kubenswrapper[4799]: I0216 12:50:46.161735 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:50:46 crc kubenswrapper[4799]: I0216 12:50:46.248338 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f789c7d5f-sxvnw"] Feb 16 12:50:46 crc kubenswrapper[4799]: I0216 12:50:46.249154 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" podUID="cf16668a-2109-479a-a133-77530f391656" containerName="dnsmasq-dns" containerID="cri-o://a602c4b86160a263e1d1ca0d1ffdb9bb89978558e649511ed87e0fd8e5be8c30" gracePeriod=10 Feb 16 12:50:47 crc kubenswrapper[4799]: I0216 12:50:47.124526 4799 generic.go:334] "Generic (PLEG): container finished" podID="cf16668a-2109-479a-a133-77530f391656" containerID="a602c4b86160a263e1d1ca0d1ffdb9bb89978558e649511ed87e0fd8e5be8c30" exitCode=0 Feb 16 12:50:47 crc kubenswrapper[4799]: I0216 12:50:47.124583 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" 
event={"ID":"cf16668a-2109-479a-a133-77530f391656","Type":"ContainerDied","Data":"a602c4b86160a263e1d1ca0d1ffdb9bb89978558e649511ed87e0fd8e5be8c30"} Feb 16 12:50:48 crc kubenswrapper[4799]: I0216 12:50:48.806306 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" podUID="cf16668a-2109-479a-a133-77530f391656" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Feb 16 12:50:48 crc kubenswrapper[4799]: I0216 12:50:48.936613 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.080804 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqw9w\" (UniqueName: \"kubernetes.io/projected/f90f7da9-52e1-4369-a123-145ec31299db-kube-api-access-rqw9w\") pod \"f90f7da9-52e1-4369-a123-145ec31299db\" (UID: \"f90f7da9-52e1-4369-a123-145ec31299db\") " Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.080890 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90f7da9-52e1-4369-a123-145ec31299db-operator-scripts\") pod \"f90f7da9-52e1-4369-a123-145ec31299db\" (UID: \"f90f7da9-52e1-4369-a123-145ec31299db\") " Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.081568 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90f7da9-52e1-4369-a123-145ec31299db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f90f7da9-52e1-4369-a123-145ec31299db" (UID: "f90f7da9-52e1-4369-a123-145ec31299db"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.088851 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f90f7da9-52e1-4369-a123-145ec31299db-kube-api-access-rqw9w" (OuterVolumeSpecName: "kube-api-access-rqw9w") pod "f90f7da9-52e1-4369-a123-145ec31299db" (UID: "f90f7da9-52e1-4369-a123-145ec31299db"). InnerVolumeSpecName "kube-api-access-rqw9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.147486 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4hjhh" event={"ID":"f90f7da9-52e1-4369-a123-145ec31299db","Type":"ContainerDied","Data":"7b51f1cf029e51422e694bfe6ee907086cba1b75f9941436c4b29f6df4cdeb32"} Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.147531 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b51f1cf029e51422e694bfe6ee907086cba1b75f9941436c4b29f6df4cdeb32" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.147586 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4hjhh" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.183225 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqw9w\" (UniqueName: \"kubernetes.io/projected/f90f7da9-52e1-4369-a123-145ec31299db-kube-api-access-rqw9w\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.183292 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90f7da9-52e1-4369-a123-145ec31299db-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.621148 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.801947 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-config\") pod \"cf16668a-2109-479a-a133-77530f391656\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.802290 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-sb\") pod \"cf16668a-2109-479a-a133-77530f391656\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.802331 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-dns-svc\") pod \"cf16668a-2109-479a-a133-77530f391656\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.802440 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq7pd\" (UniqueName: \"kubernetes.io/projected/cf16668a-2109-479a-a133-77530f391656-kube-api-access-vq7pd\") pod \"cf16668a-2109-479a-a133-77530f391656\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.802553 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-nb\") pod \"cf16668a-2109-479a-a133-77530f391656\" (UID: \"cf16668a-2109-479a-a133-77530f391656\") " Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.809451 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cf16668a-2109-479a-a133-77530f391656-kube-api-access-vq7pd" (OuterVolumeSpecName: "kube-api-access-vq7pd") pod "cf16668a-2109-479a-a133-77530f391656" (UID: "cf16668a-2109-479a-a133-77530f391656"). InnerVolumeSpecName "kube-api-access-vq7pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.866229 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf16668a-2109-479a-a133-77530f391656" (UID: "cf16668a-2109-479a-a133-77530f391656"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.881460 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-config" (OuterVolumeSpecName: "config") pod "cf16668a-2109-479a-a133-77530f391656" (UID: "cf16668a-2109-479a-a133-77530f391656"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.887950 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf16668a-2109-479a-a133-77530f391656" (UID: "cf16668a-2109-479a-a133-77530f391656"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.901610 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf16668a-2109-479a-a133-77530f391656" (UID: "cf16668a-2109-479a-a133-77530f391656"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.904462 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.904500 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.904512 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.904523 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf16668a-2109-479a-a133-77530f391656-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:49 crc kubenswrapper[4799]: I0216 12:50:49.904536 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq7pd\" (UniqueName: \"kubernetes.io/projected/cf16668a-2109-479a-a133-77530f391656-kube-api-access-vq7pd\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.016951 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wr6ph-config-578xg"] Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.088671 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zq5qf"] Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.158642 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" 
event={"ID":"cf16668a-2109-479a-a133-77530f391656","Type":"ContainerDied","Data":"ff2a47edc623d044099fba4576e848c93223ffb9a42ee8ff23cd2a111236411b"} Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.158672 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f789c7d5f-sxvnw" Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.158713 4799 scope.go:117] "RemoveContainer" containerID="a602c4b86160a263e1d1ca0d1ffdb9bb89978558e649511ed87e0fd8e5be8c30" Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.160710 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-578xg" event={"ID":"bb552808-1de0-4b51-905b-ff12fedb0f96","Type":"ContainerStarted","Data":"fabed9d83171190a1c1fd83d66432bfec2c18f5b9c9da05adbe879f64a84b96c"} Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.162956 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zq5qf" event={"ID":"0e4b5dce-a5b1-4372-8138-03e7d62b9772","Type":"ContainerStarted","Data":"217f9aff8f54c9636f4c5be4493d19a1ed3052abcc8bbb2cf64b90eb2f5fa34f"} Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.165158 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-2x28q" event={"ID":"762cb41d-d3c9-4b97-bdbf-7062f65fba96","Type":"ContainerStarted","Data":"16503752f0b490ed00baa9558939b9055ee4a233d05c6d184de771a79644cf49"} Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.168219 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8scmb" event={"ID":"1144c46a-41c9-4032-8811-2b3c930586f9","Type":"ContainerStarted","Data":"dbd1aff9ab43870c091e16e6f445ec17ae85f722d36250d5a7919d6c6660c5a0"} Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.192650 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-2x28q" podStartSLOduration=2.777061734 
podStartE2EDuration="15.192626219s" podCreationTimestamp="2026-02-16 12:50:35 +0000 UTC" firstStartedPulling="2026-02-16 12:50:37.256144029 +0000 UTC m=+1142.849159363" lastFinishedPulling="2026-02-16 12:50:49.671708504 +0000 UTC m=+1155.264723848" observedRunningTime="2026-02-16 12:50:50.186646766 +0000 UTC m=+1155.779662100" watchObservedRunningTime="2026-02-16 12:50:50.192626219 +0000 UTC m=+1155.785641553" Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.199430 4799 scope.go:117] "RemoveContainer" containerID="906769aedc4987ca62b5057b97c1bad9332b660f02bedd4d8defcb3e2caeccfd" Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.215891 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8scmb" podStartSLOduration=2.109505017 podStartE2EDuration="17.215866043s" podCreationTimestamp="2026-02-16 12:50:33 +0000 UTC" firstStartedPulling="2026-02-16 12:50:34.447712677 +0000 UTC m=+1140.040728011" lastFinishedPulling="2026-02-16 12:50:49.554073703 +0000 UTC m=+1155.147089037" observedRunningTime="2026-02-16 12:50:50.202510136 +0000 UTC m=+1155.795525470" watchObservedRunningTime="2026-02-16 12:50:50.215866043 +0000 UTC m=+1155.808881377" Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.240267 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f789c7d5f-sxvnw"] Feb 16 12:50:50 crc kubenswrapper[4799]: I0216 12:50:50.250648 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f789c7d5f-sxvnw"] Feb 16 12:50:51 crc kubenswrapper[4799]: I0216 12:50:51.170431 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf16668a-2109-479a-a133-77530f391656" path="/var/lib/kubelet/pods/cf16668a-2109-479a-a133-77530f391656/volumes" Feb 16 12:50:51 crc kubenswrapper[4799]: I0216 12:50:51.180552 4799 generic.go:334] "Generic (PLEG): container finished" podID="bb552808-1de0-4b51-905b-ff12fedb0f96" 
containerID="788ddcab4cad9150958fd9efdf483fe720eeedd1fbc8f3e163e71118b3abd8f6" exitCode=0 Feb 16 12:50:51 crc kubenswrapper[4799]: I0216 12:50:51.180667 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-578xg" event={"ID":"bb552808-1de0-4b51-905b-ff12fedb0f96","Type":"ContainerDied","Data":"788ddcab4cad9150958fd9efdf483fe720eeedd1fbc8f3e163e71118b3abd8f6"} Feb 16 12:50:51 crc kubenswrapper[4799]: I0216 12:50:51.183720 4799 generic.go:334] "Generic (PLEG): container finished" podID="0e4b5dce-a5b1-4372-8138-03e7d62b9772" containerID="795813fdf916b015d629fdd8271f5a0b2197c8ab7f8bc40338e11983626ec8ad" exitCode=0 Feb 16 12:50:51 crc kubenswrapper[4799]: I0216 12:50:51.183790 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zq5qf" event={"ID":"0e4b5dce-a5b1-4372-8138-03e7d62b9772","Type":"ContainerDied","Data":"795813fdf916b015d629fdd8271f5a0b2197c8ab7f8bc40338e11983626ec8ad"} Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.591510 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.600225 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777493 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run-ovn\") pod \"bb552808-1de0-4b51-905b-ff12fedb0f96\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777563 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run\") pod \"bb552808-1de0-4b51-905b-ff12fedb0f96\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777615 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-log-ovn\") pod \"bb552808-1de0-4b51-905b-ff12fedb0f96\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777663 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgkhr\" (UniqueName: \"kubernetes.io/projected/bb552808-1de0-4b51-905b-ff12fedb0f96-kube-api-access-fgkhr\") pod \"bb552808-1de0-4b51-905b-ff12fedb0f96\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777723 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-additional-scripts\") pod \"bb552808-1de0-4b51-905b-ff12fedb0f96\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777767 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jz6db\" (UniqueName: \"kubernetes.io/projected/0e4b5dce-a5b1-4372-8138-03e7d62b9772-kube-api-access-jz6db\") pod \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\" (UID: \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\") " Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777654 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bb552808-1de0-4b51-905b-ff12fedb0f96" (UID: "bb552808-1de0-4b51-905b-ff12fedb0f96"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777686 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bb552808-1de0-4b51-905b-ff12fedb0f96" (UID: "bb552808-1de0-4b51-905b-ff12fedb0f96"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777756 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run" (OuterVolumeSpecName: "var-run") pod "bb552808-1de0-4b51-905b-ff12fedb0f96" (UID: "bb552808-1de0-4b51-905b-ff12fedb0f96"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777836 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4b5dce-a5b1-4372-8138-03e7d62b9772-operator-scripts\") pod \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\" (UID: \"0e4b5dce-a5b1-4372-8138-03e7d62b9772\") " Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.777987 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-scripts\") pod \"bb552808-1de0-4b51-905b-ff12fedb0f96\" (UID: \"bb552808-1de0-4b51-905b-ff12fedb0f96\") " Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.778471 4799 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.778498 4799 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.778512 4799 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb552808-1de0-4b51-905b-ff12fedb0f96-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.778726 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bb552808-1de0-4b51-905b-ff12fedb0f96" (UID: "bb552808-1de0-4b51-905b-ff12fedb0f96"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.779022 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-scripts" (OuterVolumeSpecName: "scripts") pod "bb552808-1de0-4b51-905b-ff12fedb0f96" (UID: "bb552808-1de0-4b51-905b-ff12fedb0f96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.779875 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4b5dce-a5b1-4372-8138-03e7d62b9772-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e4b5dce-a5b1-4372-8138-03e7d62b9772" (UID: "0e4b5dce-a5b1-4372-8138-03e7d62b9772"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.784001 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb552808-1de0-4b51-905b-ff12fedb0f96-kube-api-access-fgkhr" (OuterVolumeSpecName: "kube-api-access-fgkhr") pod "bb552808-1de0-4b51-905b-ff12fedb0f96" (UID: "bb552808-1de0-4b51-905b-ff12fedb0f96"). InnerVolumeSpecName "kube-api-access-fgkhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.787417 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4b5dce-a5b1-4372-8138-03e7d62b9772-kube-api-access-jz6db" (OuterVolumeSpecName: "kube-api-access-jz6db") pod "0e4b5dce-a5b1-4372-8138-03e7d62b9772" (UID: "0e4b5dce-a5b1-4372-8138-03e7d62b9772"). InnerVolumeSpecName "kube-api-access-jz6db". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.879835 4799 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.879887 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz6db\" (UniqueName: \"kubernetes.io/projected/0e4b5dce-a5b1-4372-8138-03e7d62b9772-kube-api-access-jz6db\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.879903 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4b5dce-a5b1-4372-8138-03e7d62b9772-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.879918 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb552808-1de0-4b51-905b-ff12fedb0f96-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:52 crc kubenswrapper[4799]: I0216 12:50:52.879931 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgkhr\" (UniqueName: \"kubernetes.io/projected/bb552808-1de0-4b51-905b-ff12fedb0f96-kube-api-access-fgkhr\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:53 crc kubenswrapper[4799]: I0216 12:50:53.206274 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wr6ph-config-578xg" event={"ID":"bb552808-1de0-4b51-905b-ff12fedb0f96","Type":"ContainerDied","Data":"fabed9d83171190a1c1fd83d66432bfec2c18f5b9c9da05adbe879f64a84b96c"} Feb 16 12:50:53 crc kubenswrapper[4799]: I0216 12:50:53.206882 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fabed9d83171190a1c1fd83d66432bfec2c18f5b9c9da05adbe879f64a84b96c" Feb 16 12:50:53 crc kubenswrapper[4799]: 
I0216 12:50:53.206362 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wr6ph-config-578xg" Feb 16 12:50:53 crc kubenswrapper[4799]: I0216 12:50:53.208998 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zq5qf" event={"ID":"0e4b5dce-a5b1-4372-8138-03e7d62b9772","Type":"ContainerDied","Data":"217f9aff8f54c9636f4c5be4493d19a1ed3052abcc8bbb2cf64b90eb2f5fa34f"} Feb 16 12:50:53 crc kubenswrapper[4799]: I0216 12:50:53.209051 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="217f9aff8f54c9636f4c5be4493d19a1ed3052abcc8bbb2cf64b90eb2f5fa34f" Feb 16 12:50:53 crc kubenswrapper[4799]: I0216 12:50:53.209120 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zq5qf" Feb 16 12:50:53 crc kubenswrapper[4799]: I0216 12:50:53.704223 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wr6ph-config-578xg"] Feb 16 12:50:53 crc kubenswrapper[4799]: I0216 12:50:53.716436 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wr6ph-config-578xg"] Feb 16 12:50:54 crc kubenswrapper[4799]: I0216 12:50:54.863296 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:54 crc kubenswrapper[4799]: I0216 12:50:54.874356 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:55 crc kubenswrapper[4799]: I0216 12:50:55.161609 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb552808-1de0-4b51-905b-ff12fedb0f96" path="/var/lib/kubelet/pods/bb552808-1de0-4b51-905b-ff12fedb0f96/volumes" Feb 16 12:50:55 crc kubenswrapper[4799]: I0216 12:50:55.231069 4799 generic.go:334] "Generic (PLEG): container finished" podID="762cb41d-d3c9-4b97-bdbf-7062f65fba96" 
containerID="16503752f0b490ed00baa9558939b9055ee4a233d05c6d184de771a79644cf49" exitCode=0 Feb 16 12:50:55 crc kubenswrapper[4799]: I0216 12:50:55.231184 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-2x28q" event={"ID":"762cb41d-d3c9-4b97-bdbf-7062f65fba96","Type":"ContainerDied","Data":"16503752f0b490ed00baa9558939b9055ee4a233d05c6d184de771a79644cf49"} Feb 16 12:50:55 crc kubenswrapper[4799]: I0216 12:50:55.239901 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.244391 4799 generic.go:334] "Generic (PLEG): container finished" podID="1144c46a-41c9-4032-8811-2b3c930586f9" containerID="dbd1aff9ab43870c091e16e6f445ec17ae85f722d36250d5a7919d6c6660c5a0" exitCode=0 Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.244545 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8scmb" event={"ID":"1144c46a-41c9-4032-8811-2b3c930586f9","Type":"ContainerDied","Data":"dbd1aff9ab43870c091e16e6f445ec17ae85f722d36250d5a7919d6c6660c5a0"} Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.657341 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.751434 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-combined-ca-bundle\") pod \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.751486 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvhfl\" (UniqueName: \"kubernetes.io/projected/762cb41d-d3c9-4b97-bdbf-7062f65fba96-kube-api-access-pvhfl\") pod \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.751681 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-db-sync-config-data\") pod \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.751743 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-config-data\") pod \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\" (UID: \"762cb41d-d3c9-4b97-bdbf-7062f65fba96\") " Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.758355 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762cb41d-d3c9-4b97-bdbf-7062f65fba96-kube-api-access-pvhfl" (OuterVolumeSpecName: "kube-api-access-pvhfl") pod "762cb41d-d3c9-4b97-bdbf-7062f65fba96" (UID: "762cb41d-d3c9-4b97-bdbf-7062f65fba96"). InnerVolumeSpecName "kube-api-access-pvhfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.760057 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "762cb41d-d3c9-4b97-bdbf-7062f65fba96" (UID: "762cb41d-d3c9-4b97-bdbf-7062f65fba96"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.784807 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "762cb41d-d3c9-4b97-bdbf-7062f65fba96" (UID: "762cb41d-d3c9-4b97-bdbf-7062f65fba96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.823171 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-config-data" (OuterVolumeSpecName: "config-data") pod "762cb41d-d3c9-4b97-bdbf-7062f65fba96" (UID: "762cb41d-d3c9-4b97-bdbf-7062f65fba96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.853204 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.853230 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.853241 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/762cb41d-d3c9-4b97-bdbf-7062f65fba96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:56 crc kubenswrapper[4799]: I0216 12:50:56.853251 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvhfl\" (UniqueName: \"kubernetes.io/projected/762cb41d-d3c9-4b97-bdbf-7062f65fba96-kube-api-access-pvhfl\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.257996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-2x28q" event={"ID":"762cb41d-d3c9-4b97-bdbf-7062f65fba96","Type":"ContainerDied","Data":"ae861d3681d23f46d0fffa3c95f2c25287530965f3bbe5290b9a5fab29c563b3"} Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.258065 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae861d3681d23f46d0fffa3c95f2c25287530965f3bbe5290b9a5fab29c563b3" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.258065 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-2x28q" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.683112 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.773255 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-combined-ca-bundle\") pod \"1144c46a-41c9-4032-8811-2b3c930586f9\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.773451 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjqvn\" (UniqueName: \"kubernetes.io/projected/1144c46a-41c9-4032-8811-2b3c930586f9-kube-api-access-cjqvn\") pod \"1144c46a-41c9-4032-8811-2b3c930586f9\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.773514 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-config-data\") pod \"1144c46a-41c9-4032-8811-2b3c930586f9\" (UID: \"1144c46a-41c9-4032-8811-2b3c930586f9\") " Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.777622 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1144c46a-41c9-4032-8811-2b3c930586f9-kube-api-access-cjqvn" (OuterVolumeSpecName: "kube-api-access-cjqvn") pod "1144c46a-41c9-4032-8811-2b3c930586f9" (UID: "1144c46a-41c9-4032-8811-2b3c930586f9"). InnerVolumeSpecName "kube-api-access-cjqvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.801597 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1144c46a-41c9-4032-8811-2b3c930586f9" (UID: "1144c46a-41c9-4032-8811-2b3c930586f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.825477 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-config-data" (OuterVolumeSpecName: "config-data") pod "1144c46a-41c9-4032-8811-2b3c930586f9" (UID: "1144c46a-41c9-4032-8811-2b3c930586f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.875718 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.875755 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjqvn\" (UniqueName: \"kubernetes.io/projected/1144c46a-41c9-4032-8811-2b3c930586f9-kube-api-access-cjqvn\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:57 crc kubenswrapper[4799]: I0216 12:50:57.875766 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1144c46a-41c9-4032-8811-2b3c930586f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.272967 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8scmb" event={"ID":"1144c46a-41c9-4032-8811-2b3c930586f9","Type":"ContainerDied","Data":"faf68876814b249b04089e3db8bc4ac7d3dcad1f4a63bcbb0f77bc22d823bccf"} Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.273435 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faf68876814b249b04089e3db8bc4ac7d3dcad1f4a63bcbb0f77bc22d823bccf" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.273081 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8scmb" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530068 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8gv69"] Feb 16 12:50:58 crc kubenswrapper[4799]: E0216 12:50:58.530658 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762cb41d-d3c9-4b97-bdbf-7062f65fba96" containerName="watcher-db-sync" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530682 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="762cb41d-d3c9-4b97-bdbf-7062f65fba96" containerName="watcher-db-sync" Feb 16 12:50:58 crc kubenswrapper[4799]: E0216 12:50:58.530702 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f783521e-3e89-4fc3-bdb6-08bc1ee82739" containerName="mariadb-account-create-update" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530711 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f783521e-3e89-4fc3-bdb6-08bc1ee82739" containerName="mariadb-account-create-update" Feb 16 12:50:58 crc kubenswrapper[4799]: E0216 12:50:58.530737 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb552808-1de0-4b51-905b-ff12fedb0f96" containerName="ovn-config" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530745 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb552808-1de0-4b51-905b-ff12fedb0f96" containerName="ovn-config" Feb 16 12:50:58 crc kubenswrapper[4799]: E0216 12:50:58.530776 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf16668a-2109-479a-a133-77530f391656" containerName="dnsmasq-dns" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530785 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf16668a-2109-479a-a133-77530f391656" containerName="dnsmasq-dns" Feb 16 12:50:58 crc kubenswrapper[4799]: E0216 12:50:58.530801 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1144c46a-41c9-4032-8811-2b3c930586f9" 
containerName="keystone-db-sync" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530808 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1144c46a-41c9-4032-8811-2b3c930586f9" containerName="keystone-db-sync" Feb 16 12:50:58 crc kubenswrapper[4799]: E0216 12:50:58.530823 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf16668a-2109-479a-a133-77530f391656" containerName="init" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530831 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf16668a-2109-479a-a133-77530f391656" containerName="init" Feb 16 12:50:58 crc kubenswrapper[4799]: E0216 12:50:58.530843 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4b5dce-a5b1-4372-8138-03e7d62b9772" containerName="mariadb-account-create-update" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530852 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4b5dce-a5b1-4372-8138-03e7d62b9772" containerName="mariadb-account-create-update" Feb 16 12:50:58 crc kubenswrapper[4799]: E0216 12:50:58.530865 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90f7da9-52e1-4369-a123-145ec31299db" containerName="mariadb-database-create" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.530873 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90f7da9-52e1-4369-a123-145ec31299db" containerName="mariadb-database-create" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.531096 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="762cb41d-d3c9-4b97-bdbf-7062f65fba96" containerName="watcher-db-sync" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.531148 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f90f7da9-52e1-4369-a123-145ec31299db" containerName="mariadb-database-create" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.531170 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1144c46a-41c9-4032-8811-2b3c930586f9" containerName="keystone-db-sync" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.531189 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb552808-1de0-4b51-905b-ff12fedb0f96" containerName="ovn-config" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.531203 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf16668a-2109-479a-a133-77530f391656" containerName="dnsmasq-dns" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.531218 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4b5dce-a5b1-4372-8138-03e7d62b9772" containerName="mariadb-account-create-update" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.531232 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f783521e-3e89-4fc3-bdb6-08bc1ee82739" containerName="mariadb-account-create-update" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.531989 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.535533 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.547204 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.547476 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.547578 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54d6894697-8g5lt"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.547653 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.547914 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qqmr2" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.559540 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.575109 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8gv69"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.596386 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54d6894697-8g5lt"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.636190 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.637459 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.643567 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.647367 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-rwt49" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.652194 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695355 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-sb\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695411 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-nb\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695437 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-svc\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695509 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-scripts\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695538 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-fernet-keys\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695560 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-config-data\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695576 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-combined-ca-bundle\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695594 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-credential-keys\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695621 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-swift-storage-0\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695639 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8rbm\" (UniqueName: \"kubernetes.io/projected/6f0f021f-47b3-4b51-bee2-ce0121992d9f-kube-api-access-q8rbm\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695668 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-config\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.695689 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzmdp\" (UniqueName: \"kubernetes.io/projected/8251c8d3-8cd6-4118-bb20-5bffe115cd32-kube-api-access-jzmdp\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.711554 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.713212 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.716537 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.750683 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.765030 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.766351 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.769736 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802057 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-scripts\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802117 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92cefdaf-4a4b-4771-9b15-0666298881e8-logs\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802159 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-fernet-keys\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 
12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802182 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802202 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-config-data\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802220 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-combined-ca-bundle\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802239 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-credential-keys\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802258 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-config-data\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802280 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802300 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-swift-storage-0\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802316 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-logs\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802335 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8rbm\" (UniqueName: \"kubernetes.io/projected/6f0f021f-47b3-4b51-bee2-ce0121992d9f-kube-api-access-q8rbm\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802364 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802383 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwk6d\" (UniqueName: \"kubernetes.io/projected/92cefdaf-4a4b-4771-9b15-0666298881e8-kube-api-access-fwk6d\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802401 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-config\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802420 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzmdp\" (UniqueName: \"kubernetes.io/projected/8251c8d3-8cd6-4118-bb20-5bffe115cd32-kube-api-access-jzmdp\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802443 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-sb\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802465 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-nb\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802482 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-svc\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802512 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.802533 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzxs\" (UniqueName: \"kubernetes.io/projected/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-kube-api-access-zzzxs\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.804885 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-sb\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.805531 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-config\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.806352 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-swift-storage-0\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.806888 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-svc\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.807007 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-nb\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.821167 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-scripts\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.822639 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-config-data\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.824292 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-fernet-keys\") pod \"keystone-bootstrap-8gv69\" (UID: 
\"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.828824 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-combined-ca-bundle\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.850113 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8rbm\" (UniqueName: \"kubernetes.io/projected/6f0f021f-47b3-4b51-bee2-ce0121992d9f-kube-api-access-q8rbm\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.853005 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-credential-keys\") pod \"keystone-bootstrap-8gv69\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.853643 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzmdp\" (UniqueName: \"kubernetes.io/projected/8251c8d3-8cd6-4118-bb20-5bffe115cd32-kube-api-access-jzmdp\") pod \"dnsmasq-dns-54d6894697-8g5lt\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.859498 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.894183 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907396 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pg7p\" (UniqueName: \"kubernetes.io/projected/c901ce2e-6b4a-464e-8679-72329a180956-kube-api-access-2pg7p\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907455 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-config-data\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907483 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907502 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-logs\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907524 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c901ce2e-6b4a-464e-8679-72329a180956-logs\") pod \"watcher-api-0\" (UID: 
\"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907557 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907575 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907593 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwk6d\" (UniqueName: \"kubernetes.io/projected/92cefdaf-4a4b-4771-9b15-0666298881e8-kube-api-access-fwk6d\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907632 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-config-data\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907667 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc 
kubenswrapper[4799]: I0216 12:50:58.907692 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzxs\" (UniqueName: \"kubernetes.io/projected/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-kube-api-access-zzzxs\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907712 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907781 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92cefdaf-4a4b-4771-9b15-0666298881e8-logs\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.907817 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.908881 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.910022 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92cefdaf-4a4b-4771-9b15-0666298881e8-logs\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.910287 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-logs\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.920837 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.922749 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.929609 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.931521 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-config-data\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.932867 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:58 crc kubenswrapper[4799]: I0216 12:50:58.959013 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-j8vxl"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.014955 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pg7p\" (UniqueName: \"kubernetes.io/projected/c901ce2e-6b4a-464e-8679-72329a180956-kube-api-access-2pg7p\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.015723 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c901ce2e-6b4a-464e-8679-72329a180956-logs\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.015845 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.016037 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-config-data\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.021155 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c901ce2e-6b4a-464e-8679-72329a180956-logs\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.041802 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.054849 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwk6d\" (UniqueName: \"kubernetes.io/projected/92cefdaf-4a4b-4771-9b15-0666298881e8-kube-api-access-fwk6d\") pod \"watcher-applier-0\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " pod="openstack/watcher-applier-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.056204 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mdlfb" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.056494 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.077896 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.088887 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-config-data\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.089012 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.093066 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pg7p\" (UniqueName: \"kubernetes.io/projected/c901ce2e-6b4a-464e-8679-72329a180956-kube-api-access-2pg7p\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.104153 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.107467 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.120879 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzxs\" (UniqueName: \"kubernetes.io/projected/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-kube-api-access-zzzxs\") pod \"watcher-decision-engine-0\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.184678 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-m5dfr"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.185793 4799 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.202622 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tswfv" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.202876 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.203069 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211378 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-db-sync-config-data\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211531 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsjgv\" (UniqueName: \"kubernetes.io/projected/407468d3-5baf-4bde-af39-679ed83889c8-kube-api-access-wsjgv\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211605 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-scripts\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211696 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-combined-ca-bundle\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211728 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-config-data\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211771 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-combined-ca-bundle\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211901 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-config\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211918 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dkjf\" (UniqueName: \"kubernetes.io/projected/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-kube-api-access-7dkjf\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.211943 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-etc-machine-id\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.253194 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-775f9c4c9f-rxgkm"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.254917 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.263514 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.278975 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.279310 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.279482 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.279616 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-njdkn" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.292245 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j8vxl"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315339 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-config\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315377 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7dkjf\" (UniqueName: \"kubernetes.io/projected/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-kube-api-access-7dkjf\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315397 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-etc-machine-id\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315453 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-db-sync-config-data\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315477 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae9f8668-6aca-49c6-9386-9adab98879a7-horizon-secret-key\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315504 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsjgv\" (UniqueName: \"kubernetes.io/projected/407468d3-5baf-4bde-af39-679ed83889c8-kube-api-access-wsjgv\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315539 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-scripts\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315564 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-scripts\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315584 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-config-data\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315601 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9f8668-6aca-49c6-9386-9adab98879a7-logs\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315618 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-combined-ca-bundle\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315638 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-config-data\") pod 
\"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315661 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-combined-ca-bundle\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.315681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfmxw\" (UniqueName: \"kubernetes.io/projected/ae9f8668-6aca-49c6-9386-9adab98879a7-kube-api-access-bfmxw\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.322138 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-etc-machine-id\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.327906 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-config\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.334054 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-775f9c4c9f-rxgkm"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.334071 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-config-data\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.338065 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-combined-ca-bundle\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.338472 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.345641 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-scripts\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.345740 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsjgv\" (UniqueName: \"kubernetes.io/projected/407468d3-5baf-4bde-af39-679ed83889c8-kube-api-access-wsjgv\") pod \"neutron-db-sync-j8vxl\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.347050 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-db-sync-config-data\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.349529 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-sync-m5dfr"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.353347 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-combined-ca-bundle\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.374313 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dkjf\" (UniqueName: \"kubernetes.io/projected/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-kube-api-access-7dkjf\") pod \"cinder-db-sync-m5dfr\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.378182 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d6894697-8g5lt"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.393147 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.404438 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-x2bbw"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.405544 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.408279 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9jzxp" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.409060 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.414145 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.416867 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfmxw\" (UniqueName: \"kubernetes.io/projected/ae9f8668-6aca-49c6-9386-9adab98879a7-kube-api-access-bfmxw\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.416923 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-combined-ca-bundle\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.416985 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-db-sync-config-data\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.417066 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl52n\" (UniqueName: \"kubernetes.io/projected/e821341e-3e99-4606-a96d-00adad2f39fb-kube-api-access-sl52n\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.417104 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae9f8668-6aca-49c6-9386-9adab98879a7-horizon-secret-key\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.417188 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-scripts\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.417219 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-config-data\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.417241 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9f8668-6aca-49c6-9386-9adab98879a7-logs\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.418449 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9f8668-6aca-49c6-9386-9adab98879a7-logs\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.418782 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-scripts\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: 
\"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.419908 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-config-data\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.425283 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae9f8668-6aca-49c6-9386-9adab98879a7-horizon-secret-key\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.439081 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rczq6"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.440324 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.447090 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.447346 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.456157 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9hnkl" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.467153 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.468726 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.472850 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.473192 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.473523 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4c8qx" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.475623 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.500238 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x2bbw"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.503197 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfmxw\" (UniqueName: \"kubernetes.io/projected/ae9f8668-6aca-49c6-9386-9adab98879a7-kube-api-access-bfmxw\") pod \"horizon-775f9c4c9f-rxgkm\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.531088 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5xp5\" (UniqueName: \"kubernetes.io/projected/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-kube-api-access-q5xp5\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.531152 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-logs\") pod 
\"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.531207 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-combined-ca-bundle\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.531241 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-config-data\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.531293 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-db-sync-config-data\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.531379 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl52n\" (UniqueName: \"kubernetes.io/projected/e821341e-3e99-4606-a96d-00adad2f39fb-kube-api-access-sl52n\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.531419 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-combined-ca-bundle\") pod \"placement-db-sync-rczq6\" (UID: 
\"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.531437 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-scripts\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.557640 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-combined-ca-bundle\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.559718 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-db-sync-config-data\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.591811 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl52n\" (UniqueName: \"kubernetes.io/projected/e821341e-3e99-4606-a96d-00adad2f39fb-kube-api-access-sl52n\") pod \"barbican-db-sync-x2bbw\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.609038 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.609813 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rczq6"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635662 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635711 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635744 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-config-data\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635771 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-logs\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635787 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635812 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635847 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-scripts\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635863 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znr5b\" (UniqueName: \"kubernetes.io/projected/300e5319-9412-411d-8c94-5fbe2b001d54-kube-api-access-znr5b\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635900 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-combined-ca-bundle\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635916 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-scripts\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635970 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-config-data\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.635993 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5xp5\" (UniqueName: \"kubernetes.io/projected/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-kube-api-access-q5xp5\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.636012 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-logs\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.636404 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-logs\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.650668 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-scripts\") pod \"placement-db-sync-rczq6\" (UID: 
\"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.651060 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-668f78969f-gvgfh"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.660036 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.668080 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-combined-ca-bundle\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.692455 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5xp5\" (UniqueName: \"kubernetes.io/projected/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-kube-api-access-q5xp5\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.693027 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.695252 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-config-data\") pod \"placement-db-sync-rczq6\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.712227 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.723872 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.738039 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-scripts\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.738090 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znr5b\" (UniqueName: \"kubernetes.io/projected/300e5319-9412-411d-8c94-5fbe2b001d54-kube-api-access-znr5b\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.738266 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-config-data\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.738321 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.738350 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.738390 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-logs\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.738413 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.738443 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.750541 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-logs\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.754153 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.756040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.760471 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.774152 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.779790 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.780188 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.783011 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znr5b\" (UniqueName: \"kubernetes.io/projected/300e5319-9412-411d-8c94-5fbe2b001d54-kube-api-access-znr5b\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.786899 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.787146 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-scripts\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.791420 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.806438 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-config-data\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.812203 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.829550 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rczq6" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.842336 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-config\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.842502 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-swift-storage-0\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.842599 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkbn\" (UniqueName: \"kubernetes.io/projected/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-kube-api-access-pbkbn\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.842740 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-nb\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.842793 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-sb\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.842863 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-svc\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.864519 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.870637 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668f78969f-gvgfh"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.888980 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d9fccfddf-b9jg7"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.892194 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.900690 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.943612 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944097 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-swift-storage-0\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944161 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-log-httpd\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944202 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbkbn\" (UniqueName: \"kubernetes.io/projected/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-kube-api-access-pbkbn\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944245 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:50:59 crc kubenswrapper[4799]: 
I0216 12:50:59.944283 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-run-httpd\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944305 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-nb\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944335 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-scripts\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944361 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-sb\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944378 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944426 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-svc\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944441 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-config-data\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944473 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5ld4\" (UniqueName: \"kubernetes.io/projected/3e71f22a-250c-48e2-8309-7dfeb1325a2b-kube-api-access-c5ld4\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.944507 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-config\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.945338 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-swift-storage-0\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.945455 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-config\") pod 
\"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.946061 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-nb\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.946235 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-sb\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.946512 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.946783 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-svc\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.950019 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.950493 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.972116 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbkbn\" (UniqueName: \"kubernetes.io/projected/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-kube-api-access-pbkbn\") pod \"dnsmasq-dns-668f78969f-gvgfh\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:50:59 crc kubenswrapper[4799]: I0216 12:50:59.977999 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d9fccfddf-b9jg7"] Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.013774 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.048973 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.066495 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-config-data\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.066656 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.066772 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5ld4\" (UniqueName: \"kubernetes.io/projected/3e71f22a-250c-48e2-8309-7dfeb1325a2b-kube-api-access-c5ld4\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.066867 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/233e1940-0b00-4556-9a3c-c438d43a6816-logs\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.066993 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 
12:51:00.067089 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.073556 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-log-httpd\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.073721 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.073815 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.073993 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9hp8\" (UniqueName: \"kubernetes.io/projected/3f1a3af6-c025-4113-8967-3a8d48724ef9-kube-api-access-b9hp8\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: 
I0216 12:51:00.074180 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/233e1940-0b00-4556-9a3c-c438d43a6816-horizon-secret-key\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.079348 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.079505 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.079618 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4pzm\" (UniqueName: \"kubernetes.io/projected/233e1940-0b00-4556-9a3c-c438d43a6816-kube-api-access-k4pzm\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.079809 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-run-httpd\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.079946 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-scripts\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.080045 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.080160 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-config-data\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.080285 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.080392 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-scripts\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.075014 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-config-data\") pod 
\"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.081092 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-run-httpd\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.075732 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-log-httpd\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.089972 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-scripts\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.106390 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.113338 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.117869 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5ld4\" (UniqueName: 
\"kubernetes.io/projected/3e71f22a-250c-48e2-8309-7dfeb1325a2b-kube-api-access-c5ld4\") pod \"ceilometer-0\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201543 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201606 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201663 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201690 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201710 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9hp8\" (UniqueName: \"kubernetes.io/projected/3f1a3af6-c025-4113-8967-3a8d48724ef9-kube-api-access-b9hp8\") 
pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201776 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/233e1940-0b00-4556-9a3c-c438d43a6816-horizon-secret-key\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201820 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4pzm\" (UniqueName: \"kubernetes.io/projected/233e1940-0b00-4556-9a3c-c438d43a6816-kube-api-access-k4pzm\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201840 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201922 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201945 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-config-data\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " 
pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.201992 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-scripts\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.202063 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.202113 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/233e1940-0b00-4556-9a3c-c438d43a6816-logs\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.202851 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/233e1940-0b00-4556-9a3c-c438d43a6816-logs\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.203689 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.209361 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.215051 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-scripts\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.215058 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.220256 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-config-data\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.234260 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.235422 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.245522 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/233e1940-0b00-4556-9a3c-c438d43a6816-horizon-secret-key\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.245844 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4pzm\" (UniqueName: \"kubernetes.io/projected/233e1940-0b00-4556-9a3c-c438d43a6816-kube-api-access-k4pzm\") pod \"horizon-d9fccfddf-b9jg7\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.251112 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8gv69"] Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.258954 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.259403 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9hp8\" (UniqueName: \"kubernetes.io/projected/3f1a3af6-c025-4113-8967-3a8d48724ef9-kube-api-access-b9hp8\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.260324 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.275730 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.370725 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8gv69" event={"ID":"6f0f021f-47b3-4b51-bee2-ce0121992d9f","Type":"ContainerStarted","Data":"f6ccb081af1ae8d1ad4536e2a2c5090ea19d519c70baa95455f0a0003a0cb556"} Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.373027 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.434501 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.436467 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.465138 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.910681 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:51:00 crc kubenswrapper[4799]: I0216 12:51:00.967186 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.016437 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d6894697-8g5lt"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.045153 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-775f9c4c9f-rxgkm"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.057433 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.084067 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.100221 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.112413 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65d49bd78c-pb22s"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.114846 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: W0216 12:51:01.131863 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc901ce2e_6b4a_464e_8679_72329a180956.slice/crio-8205ea8a0b956a3180c11cd0542c3918d8648fbe96ffe4c885365f7d5e74046a WatchSource:0}: Error finding container 8205ea8a0b956a3180c11cd0542c3918d8648fbe96ffe4c885365f7d5e74046a: Status 404 returned error can't find the container with id 8205ea8a0b956a3180c11cd0542c3918d8648fbe96ffe4c885365f7d5e74046a Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.186113 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65d49bd78c-pb22s"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.230694 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018a7587-e44d-4974-8e51-9241904ad7df-horizon-secret-key\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.230768 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s86bp\" (UniqueName: \"kubernetes.io/projected/018a7587-e44d-4974-8e51-9241904ad7df-kube-api-access-s86bp\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.230920 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018a7587-e44d-4974-8e51-9241904ad7df-logs\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc 
kubenswrapper[4799]: I0216 12:51:01.230986 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-config-data\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.231030 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-scripts\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.291742 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j8vxl"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.308445 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668f78969f-gvgfh"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.333869 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018a7587-e44d-4974-8e51-9241904ad7df-logs\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.333939 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-config-data\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.333979 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-scripts\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.334041 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018a7587-e44d-4974-8e51-9241904ad7df-horizon-secret-key\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.334070 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s86bp\" (UniqueName: \"kubernetes.io/projected/018a7587-e44d-4974-8e51-9241904ad7df-kube-api-access-s86bp\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.334369 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018a7587-e44d-4974-8e51-9241904ad7df-logs\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.335822 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-config-data\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.336300 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-scripts\") pod \"horizon-65d49bd78c-pb22s\" (UID: 
\"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.348488 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-m5dfr"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.349046 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018a7587-e44d-4974-8e51-9241904ad7df-horizon-secret-key\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.366739 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s86bp\" (UniqueName: \"kubernetes.io/projected/018a7587-e44d-4974-8e51-9241904ad7df-kube-api-access-s86bp\") pod \"horizon-65d49bd78c-pb22s\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.475182 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" event={"ID":"51e33f8d-7dc3-4d9a-a6db-c005cae6f522","Type":"ContainerStarted","Data":"536ebb56e1a77c06b44b167427522abc1642ea92f0fc79600028e6126b4e011f"} Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.483063 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8gv69" event={"ID":"6f0f021f-47b3-4b51-bee2-ce0121992d9f","Type":"ContainerStarted","Data":"0c75451f10a68da626c898a9ba324f74b9efc33a0a34ef25960ca38ff2ae70f3"} Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.485853 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rczq6"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.494144 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m5dfr" 
event={"ID":"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930","Type":"ContainerStarted","Data":"fb6fe4d75932c786ab0845afe32b34ddd9f017e4863891e7562221bb38868fad"} Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.540147 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d6894697-8g5lt" event={"ID":"8251c8d3-8cd6-4118-bb20-5bffe115cd32","Type":"ContainerStarted","Data":"f723549c4e7aa4354ffb5c8554f7a74c8b95df1fc3390a862df1c53bb6f472c1"} Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.546608 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j8vxl" event={"ID":"407468d3-5baf-4bde-af39-679ed83889c8","Type":"ContainerStarted","Data":"76cd0f088b32ddb9c57a86f36b69c59c78befaf626e2c64f49e1356433499546"} Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.552361 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.553875 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c901ce2e-6b4a-464e-8679-72329a180956","Type":"ContainerStarted","Data":"8205ea8a0b956a3180c11cd0542c3918d8648fbe96ffe4c885365f7d5e74046a"} Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.557619 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"92cefdaf-4a4b-4771-9b15-0666298881e8","Type":"ContainerStarted","Data":"12bb85416e229ec0e8cabae18b740175c437404c6ca26909f7a8ba86961d56a6"} Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.582028 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x2bbw"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.590557 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rczq6" event={"ID":"03cbd43b-bc5a-4954-aa6f-1cb9440076a9","Type":"ContainerStarted","Data":"9a088da99d625b603d4ac3f727e98ea97071c1017b56f9a02ababfc66c897ae1"} Feb 
16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.593531 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.596557 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9ef5643d-2fd2-478a-98bd-ed6217fa9b32","Type":"ContainerStarted","Data":"1e8d181ada264ffef02b5cab01376503beca7cc826cfe8fae6e69c81f27256d8"} Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.603600 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.617447 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.626170 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-775f9c4c9f-rxgkm"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.636334 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d9fccfddf-b9jg7"] Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.652504 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8gv69" podStartSLOduration=3.652482912 podStartE2EDuration="3.652482912s" podCreationTimestamp="2026-02-16 12:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:01.539100993 +0000 UTC m=+1167.132116337" watchObservedRunningTime="2026-02-16 12:51:01.652482912 +0000 UTC m=+1167.245498246" Feb 16 12:51:01 crc kubenswrapper[4799]: W0216 12:51:01.677424 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e71f22a_250c_48e2_8309_7dfeb1325a2b.slice/crio-018c6af4407a3b910e72e4027bf79f2cc50b5b2fda9264ea2f437f7910b13f66 
WatchSource:0}: Error finding container 018c6af4407a3b910e72e4027bf79f2cc50b5b2fda9264ea2f437f7910b13f66: Status 404 returned error can't find the container with id 018c6af4407a3b910e72e4027bf79f2cc50b5b2fda9264ea2f437f7910b13f66 Feb 16 12:51:01 crc kubenswrapper[4799]: I0216 12:51:01.689614 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:51:01 crc kubenswrapper[4799]: W0216 12:51:01.789825 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae9f8668_6aca_49c6_9386_9adab98879a7.slice/crio-d55b276bd38c979ab6457fd9854e2a7b49e0b7aca08625a5d9775eda8cf4b2a0 WatchSource:0}: Error finding container d55b276bd38c979ab6457fd9854e2a7b49e0b7aca08625a5d9775eda8cf4b2a0: Status 404 returned error can't find the container with id d55b276bd38c979ab6457fd9854e2a7b49e0b7aca08625a5d9775eda8cf4b2a0 Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.626291 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"300e5319-9412-411d-8c94-5fbe2b001d54","Type":"ContainerStarted","Data":"a1215a8b11d1a4797d34073985526188a04c4107740147ecf59ffe29b394f09f"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.629883 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-775f9c4c9f-rxgkm" event={"ID":"ae9f8668-6aca-49c6-9386-9adab98879a7","Type":"ContainerStarted","Data":"d55b276bd38c979ab6457fd9854e2a7b49e0b7aca08625a5d9775eda8cf4b2a0"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.631996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j8vxl" event={"ID":"407468d3-5baf-4bde-af39-679ed83889c8","Type":"ContainerStarted","Data":"fa6c1b8da983e0dee2d661b347a35553f1dac406a6246a771e9f1cd59eb8dbea"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.639818 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-x2bbw" event={"ID":"e821341e-3e99-4606-a96d-00adad2f39fb","Type":"ContainerStarted","Data":"2859b4a8a026b718aff1b3509208d021d92122183b1274f26a6c39ef111fda96"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.642041 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65d49bd78c-pb22s"] Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.668257 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-j8vxl" podStartSLOduration=4.668235719 podStartE2EDuration="4.668235719s" podCreationTimestamp="2026-02-16 12:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:02.661147215 +0000 UTC m=+1168.254162559" watchObservedRunningTime="2026-02-16 12:51:02.668235719 +0000 UTC m=+1168.261251053" Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.679295 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c901ce2e-6b4a-464e-8679-72329a180956","Type":"ContainerStarted","Data":"e05b93d4834312107d2628730b65f64477261c59e5c6d751595dfb648dc1c80a"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.679350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c901ce2e-6b4a-464e-8679-72329a180956","Type":"ContainerStarted","Data":"aa295904e5bbc2812cc320eca894cbd60c3092c40678bc66fde2fda3d527b5bc"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.679509 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api-log" containerID="cri-o://aa295904e5bbc2812cc320eca894cbd60c3092c40678bc66fde2fda3d527b5bc" gracePeriod=30 Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.680946 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/watcher-api-0" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api" containerID="cri-o://e05b93d4834312107d2628730b65f64477261c59e5c6d751595dfb648dc1c80a" gracePeriod=30 Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.681043 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.685483 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.159:9322/\": dial tcp 10.217.0.159:9322: connect: connection refused" Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.688178 4799 generic.go:334] "Generic (PLEG): container finished" podID="8251c8d3-8cd6-4118-bb20-5bffe115cd32" containerID="5e0259b605d5bd9ae6ae6ef4818a4d751130bb7ca852cf3bc3a79992823f2e1f" exitCode=0 Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.688239 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d6894697-8g5lt" event={"ID":"8251c8d3-8cd6-4118-bb20-5bffe115cd32","Type":"ContainerDied","Data":"5e0259b605d5bd9ae6ae6ef4818a4d751130bb7ca852cf3bc3a79992823f2e1f"} Feb 16 12:51:02 crc kubenswrapper[4799]: W0216 12:51:02.693847 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod018a7587_e44d_4974_8e51_9241904ad7df.slice/crio-6a098cbff2549ede61b151f9b05efd2cfd4fa2c0764107ad61097799f52efc02 WatchSource:0}: Error finding container 6a098cbff2549ede61b151f9b05efd2cfd4fa2c0764107ad61097799f52efc02: Status 404 returned error can't find the container with id 6a098cbff2549ede61b151f9b05efd2cfd4fa2c0764107ad61097799f52efc02 Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.694024 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d9fccfddf-b9jg7" 
event={"ID":"233e1940-0b00-4556-9a3c-c438d43a6816","Type":"ContainerStarted","Data":"8bafcdf3f2c28cb5ed0fb030a63e36e5f01d66db2dd8d5a41d252334be976b2f"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.696655 4799 generic.go:334] "Generic (PLEG): container finished" podID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" containerID="7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7" exitCode=0 Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.696726 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" event={"ID":"51e33f8d-7dc3-4d9a-a6db-c005cae6f522","Type":"ContainerDied","Data":"7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.720218 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.7201801020000005 podStartE2EDuration="4.720180102s" podCreationTimestamp="2026-02-16 12:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:02.703956065 +0000 UTC m=+1168.296971399" watchObservedRunningTime="2026-02-16 12:51:02.720180102 +0000 UTC m=+1168.313195426" Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.720609 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e71f22a-250c-48e2-8309-7dfeb1325a2b","Type":"ContainerStarted","Data":"018c6af4407a3b910e72e4027bf79f2cc50b5b2fda9264ea2f437f7910b13f66"} Feb 16 12:51:02 crc kubenswrapper[4799]: I0216 12:51:02.727192 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f1a3af6-c025-4113-8967-3a8d48724ef9","Type":"ContainerStarted","Data":"4eaa99915f8f6742666b9c6836082aa62dacd2766742fd4d833700380feb7a46"} Feb 16 12:51:03 crc kubenswrapper[4799]: I0216 12:51:03.739286 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"300e5319-9412-411d-8c94-5fbe2b001d54","Type":"ContainerStarted","Data":"b3f1df723568629195e41d1a339702a8b6160ef2a42cf4d79e628be311effae8"} Feb 16 12:51:03 crc kubenswrapper[4799]: I0216 12:51:03.741775 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d49bd78c-pb22s" event={"ID":"018a7587-e44d-4974-8e51-9241904ad7df","Type":"ContainerStarted","Data":"6a098cbff2549ede61b151f9b05efd2cfd4fa2c0764107ad61097799f52efc02"} Feb 16 12:51:03 crc kubenswrapper[4799]: I0216 12:51:03.744045 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f1a3af6-c025-4113-8967-3a8d48724ef9","Type":"ContainerStarted","Data":"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030"} Feb 16 12:51:03 crc kubenswrapper[4799]: I0216 12:51:03.746796 4799 generic.go:334] "Generic (PLEG): container finished" podID="c901ce2e-6b4a-464e-8679-72329a180956" containerID="aa295904e5bbc2812cc320eca894cbd60c3092c40678bc66fde2fda3d527b5bc" exitCode=143 Feb 16 12:51:03 crc kubenswrapper[4799]: I0216 12:51:03.747582 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c901ce2e-6b4a-464e-8679-72329a180956","Type":"ContainerDied","Data":"aa295904e5bbc2812cc320eca894cbd60c3092c40678bc66fde2fda3d527b5bc"} Feb 16 12:51:04 crc kubenswrapper[4799]: I0216 12:51:04.410555 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.080038 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.622589 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d9fccfddf-b9jg7"] Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.662087 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-6746fc7768-pc68r"] Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.663845 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.666084 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.672939 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6746fc7768-pc68r"] Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.725556 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5357e09b-7a51-4687-be1c-99a473120c90-logs\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.725605 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-tls-certs\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.725637 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-combined-ca-bundle\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.725672 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmv4f\" (UniqueName: 
\"kubernetes.io/projected/5357e09b-7a51-4687-be1c-99a473120c90-kube-api-access-tmv4f\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.725706 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-secret-key\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.725733 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-scripts\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.725834 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-config-data\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.735915 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65d49bd78c-pb22s"] Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.756723 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b64799464-xwrv9"] Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.758287 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.774972 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b64799464-xwrv9"] Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.830208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa66dcb2-43c2-4824-80f8-30911a4a8c72-config-data\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.830249 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98lvh\" (UniqueName: \"kubernetes.io/projected/aa66dcb2-43c2-4824-80f8-30911a4a8c72-kube-api-access-98lvh\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.830447 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa66dcb2-43c2-4824-80f8-30911a4a8c72-scripts\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.830497 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-combined-ca-bundle\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.830549 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-config-data\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.830999 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5357e09b-7a51-4687-be1c-99a473120c90-logs\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.831568 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-tls-certs\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.831628 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-combined-ca-bundle\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.831693 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmv4f\" (UniqueName: \"kubernetes.io/projected/5357e09b-7a51-4687-be1c-99a473120c90-kube-api-access-tmv4f\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.831761 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-horizon-secret-key\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.831803 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-secret-key\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.831866 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-scripts\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.831936 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-horizon-tls-certs\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.831968 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa66dcb2-43c2-4824-80f8-30911a4a8c72-logs\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.832256 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-config-data\") pod 
\"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.832508 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5357e09b-7a51-4687-be1c-99a473120c90-logs\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.832921 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-scripts\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.840418 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-combined-ca-bundle\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.841170 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-secret-key\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.842372 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-tls-certs\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc 
kubenswrapper[4799]: I0216 12:51:07.853546 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmv4f\" (UniqueName: \"kubernetes.io/projected/5357e09b-7a51-4687-be1c-99a473120c90-kube-api-access-tmv4f\") pod \"horizon-6746fc7768-pc68r\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.934142 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-horizon-tls-certs\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.934210 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa66dcb2-43c2-4824-80f8-30911a4a8c72-logs\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.934239 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa66dcb2-43c2-4824-80f8-30911a4a8c72-config-data\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.934261 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98lvh\" (UniqueName: \"kubernetes.io/projected/aa66dcb2-43c2-4824-80f8-30911a4a8c72-kube-api-access-98lvh\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.934333 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa66dcb2-43c2-4824-80f8-30911a4a8c72-scripts\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.934375 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-combined-ca-bundle\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.934495 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-horizon-secret-key\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.936261 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa66dcb2-43c2-4824-80f8-30911a4a8c72-scripts\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.936950 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa66dcb2-43c2-4824-80f8-30911a4a8c72-logs\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.940798 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa66dcb2-43c2-4824-80f8-30911a4a8c72-config-data\") 
pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.943035 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-combined-ca-bundle\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.946462 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-horizon-tls-certs\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.955355 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa66dcb2-43c2-4824-80f8-30911a4a8c72-horizon-secret-key\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:07 crc kubenswrapper[4799]: I0216 12:51:07.959396 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98lvh\" (UniqueName: \"kubernetes.io/projected/aa66dcb2-43c2-4824-80f8-30911a4a8c72-kube-api-access-98lvh\") pod \"horizon-7b64799464-xwrv9\" (UID: \"aa66dcb2-43c2-4824-80f8-30911a4a8c72\") " pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:08 crc kubenswrapper[4799]: I0216 12:51:08.023195 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:08 crc kubenswrapper[4799]: I0216 12:51:08.078569 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.818680 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.825355 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d6894697-8g5lt" event={"ID":"8251c8d3-8cd6-4118-bb20-5bffe115cd32","Type":"ContainerDied","Data":"f723549c4e7aa4354ffb5c8554f7a74c8b95df1fc3390a862df1c53bb6f472c1"} Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.825421 4799 scope.go:117] "RemoveContainer" containerID="5e0259b605d5bd9ae6ae6ef4818a4d751130bb7ca852cf3bc3a79992823f2e1f" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.825425 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d6894697-8g5lt" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.899198 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-svc\") pod \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.899332 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-nb\") pod \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.899443 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-swift-storage-0\") pod \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " Feb 
16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.899668 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-sb\") pod \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.900059 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-config\") pod \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.900177 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzmdp\" (UniqueName: \"kubernetes.io/projected/8251c8d3-8cd6-4118-bb20-5bffe115cd32-kube-api-access-jzmdp\") pod \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\" (UID: \"8251c8d3-8cd6-4118-bb20-5bffe115cd32\") " Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.905763 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8251c8d3-8cd6-4118-bb20-5bffe115cd32-kube-api-access-jzmdp" (OuterVolumeSpecName: "kube-api-access-jzmdp") pod "8251c8d3-8cd6-4118-bb20-5bffe115cd32" (UID: "8251c8d3-8cd6-4118-bb20-5bffe115cd32"). InnerVolumeSpecName "kube-api-access-jzmdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.932053 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-config" (OuterVolumeSpecName: "config") pod "8251c8d3-8cd6-4118-bb20-5bffe115cd32" (UID: "8251c8d3-8cd6-4118-bb20-5bffe115cd32"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.932525 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8251c8d3-8cd6-4118-bb20-5bffe115cd32" (UID: "8251c8d3-8cd6-4118-bb20-5bffe115cd32"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.934572 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8251c8d3-8cd6-4118-bb20-5bffe115cd32" (UID: "8251c8d3-8cd6-4118-bb20-5bffe115cd32"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.944998 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8251c8d3-8cd6-4118-bb20-5bffe115cd32" (UID: "8251c8d3-8cd6-4118-bb20-5bffe115cd32"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:10 crc kubenswrapper[4799]: I0216 12:51:10.949249 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8251c8d3-8cd6-4118-bb20-5bffe115cd32" (UID: "8251c8d3-8cd6-4118-bb20-5bffe115cd32"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:11 crc kubenswrapper[4799]: I0216 12:51:11.002658 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzmdp\" (UniqueName: \"kubernetes.io/projected/8251c8d3-8cd6-4118-bb20-5bffe115cd32-kube-api-access-jzmdp\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:11 crc kubenswrapper[4799]: I0216 12:51:11.002692 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:11 crc kubenswrapper[4799]: I0216 12:51:11.002707 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:11 crc kubenswrapper[4799]: I0216 12:51:11.002720 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:11 crc kubenswrapper[4799]: I0216 12:51:11.002735 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:11 crc kubenswrapper[4799]: I0216 12:51:11.002747 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8251c8d3-8cd6-4118-bb20-5bffe115cd32-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:11 crc kubenswrapper[4799]: I0216 12:51:11.194047 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d6894697-8g5lt"] Feb 16 12:51:11 crc kubenswrapper[4799]: I0216 12:51:11.203286 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54d6894697-8g5lt"] Feb 16 
12:51:12 crc kubenswrapper[4799]: I0216 12:51:12.853912 4799 generic.go:334] "Generic (PLEG): container finished" podID="6f0f021f-47b3-4b51-bee2-ce0121992d9f" containerID="0c75451f10a68da626c898a9ba324f74b9efc33a0a34ef25960ca38ff2ae70f3" exitCode=0 Feb 16 12:51:12 crc kubenswrapper[4799]: I0216 12:51:12.854002 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8gv69" event={"ID":"6f0f021f-47b3-4b51-bee2-ce0121992d9f","Type":"ContainerDied","Data":"0c75451f10a68da626c898a9ba324f74b9efc33a0a34ef25960ca38ff2ae70f3"} Feb 16 12:51:13 crc kubenswrapper[4799]: I0216 12:51:13.182912 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8251c8d3-8cd6-4118-bb20-5bffe115cd32" path="/var/lib/kubelet/pods/8251c8d3-8cd6-4118-bb20-5bffe115cd32/volumes" Feb 16 12:51:15 crc kubenswrapper[4799]: E0216 12:51:15.681531 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest" Feb 16 12:51:15 crc kubenswrapper[4799]: E0216 12:51:15.681867 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest" Feb 16 12:51:15 crc kubenswrapper[4799]: E0216 12:51:15.682184 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-decision-engine,Image:38.102.83.119:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc5h598h6fh86hb7h595h695hc9hdhbch54fh5c7h685h5f4hd4h5dfh554h6dh5h98h89h68h678hcfh66h64bh67dh59fhb7h5c8h56bh8fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-decision-engine-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/watcher,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:custom-prometheus-ca,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/prometheus/ca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zzzxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST 
watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42451,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-decision-engine-0_openstack(9ef5643d-2fd2-478a-98bd-ed6217fa9b32): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:51:15 crc kubenswrapper[4799]: E0216 12:51:15.683937 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-decision-engine-0" podUID="9ef5643d-2fd2-478a-98bd-ed6217fa9b32" Feb 16 12:51:15 crc kubenswrapper[4799]: E0216 12:51:15.890818 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest\\\"\"" pod="openstack/watcher-decision-engine-0" 
podUID="9ef5643d-2fd2-478a-98bd-ed6217fa9b32" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.960649 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.961322 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.961481 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54h659h576hb5h5f4h644h5cchbbh594h5dfh85hf6hcch57dh58hfbh9fh674h68chb8h584h5ffh5f8h5fch5bbh696h5c9h55dh668h6fh669h5dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfmxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Li
venessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-775f9c4c9f-rxgkm_openstack(ae9f8668-6aca-49c6-9386-9adab98879a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.973740 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-775f9c4c9f-rxgkm" podUID="ae9f8668-6aca-49c6-9386-9adab98879a7" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.988738 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.988785 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.988900 4799 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h5c9h8ch566h54h4h5cdhdchch645h6dh5b9h5b4h578h8dhc9h588h64chb4h99h69h597h5d9h99h64dh598h555h59h687h68bh68bh59cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4pzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-d9fccfddf-b9jg7_openstack(233e1940-0b00-4556-9a3c-c438d43a6816): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.991430 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.991473 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 12:51:32 crc kubenswrapper[4799]: E0216 12:51:32.991574 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bdh667h688h5bfh568h547h556h5d4hb9h66ch575h564h5fch5d6h599h87h5b4h645h5bchbh5c7h5f8h7h689h7fh5b6h667h85h9hb5h68h594q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s86bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65d49bd78c-pb22s_openstack(018a7587-e44d-4974-8e51-9241904ad7df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:51:33 crc kubenswrapper[4799]: E0216 
12:51:33.009571 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-d9fccfddf-b9jg7" podUID="233e1940-0b00-4556-9a3c-c438d43a6816" Feb 16 12:51:33 crc kubenswrapper[4799]: E0216 12:51:33.009713 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-65d49bd78c-pb22s" podUID="018a7587-e44d-4974-8e51-9241904ad7df" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.070029 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8gv69" event={"ID":"6f0f021f-47b3-4b51-bee2-ce0121992d9f","Type":"ContainerDied","Data":"f6ccb081af1ae8d1ad4536e2a2c5090ea19d519c70baa95455f0a0003a0cb556"} Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.070321 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ccb081af1ae8d1ad4536e2a2c5090ea19d519c70baa95455f0a0003a0cb556" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.075273 4799 generic.go:334] "Generic (PLEG): container finished" podID="c901ce2e-6b4a-464e-8679-72329a180956" containerID="e05b93d4834312107d2628730b65f64477261c59e5c6d751595dfb648dc1c80a" exitCode=137 Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.075399 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"c901ce2e-6b4a-464e-8679-72329a180956","Type":"ContainerDied","Data":"e05b93d4834312107d2628730b65f64477261c59e5c6d751595dfb648dc1c80a"} Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.082429 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.170802 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-config-data\") pod \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.170873 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-fernet-keys\") pod \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.170990 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8rbm\" (UniqueName: \"kubernetes.io/projected/6f0f021f-47b3-4b51-bee2-ce0121992d9f-kube-api-access-q8rbm\") pod \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.171107 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-combined-ca-bundle\") pod \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.171209 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-credential-keys\") pod \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.171259 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-scripts\") pod \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\" (UID: \"6f0f021f-47b3-4b51-bee2-ce0121992d9f\") " Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.179271 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6f0f021f-47b3-4b51-bee2-ce0121992d9f" (UID: "6f0f021f-47b3-4b51-bee2-ce0121992d9f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.180760 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6f0f021f-47b3-4b51-bee2-ce0121992d9f" (UID: "6f0f021f-47b3-4b51-bee2-ce0121992d9f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.208520 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-scripts" (OuterVolumeSpecName: "scripts") pod "6f0f021f-47b3-4b51-bee2-ce0121992d9f" (UID: "6f0f021f-47b3-4b51-bee2-ce0121992d9f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.215337 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0f021f-47b3-4b51-bee2-ce0121992d9f-kube-api-access-q8rbm" (OuterVolumeSpecName: "kube-api-access-q8rbm") pod "6f0f021f-47b3-4b51-bee2-ce0121992d9f" (UID: "6f0f021f-47b3-4b51-bee2-ce0121992d9f"). InnerVolumeSpecName "kube-api-access-q8rbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.224007 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-config-data" (OuterVolumeSpecName: "config-data") pod "6f0f021f-47b3-4b51-bee2-ce0121992d9f" (UID: "6f0f021f-47b3-4b51-bee2-ce0121992d9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.229041 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f0f021f-47b3-4b51-bee2-ce0121992d9f" (UID: "6f0f021f-47b3-4b51-bee2-ce0121992d9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.273718 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8rbm\" (UniqueName: \"kubernetes.io/projected/6f0f021f-47b3-4b51-bee2-ce0121992d9f-kube-api-access-q8rbm\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.273763 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.273779 4799 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.273794 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.273811 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:33 crc kubenswrapper[4799]: I0216 12:51:33.273827 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f0f021f-47b3-4b51-bee2-ce0121992d9f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:33 crc kubenswrapper[4799]: E0216 12:51:33.624612 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 16 12:51:33 crc kubenswrapper[4799]: E0216 12:51:33.624966 4799 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 16 12:51:33 crc kubenswrapper[4799]: E0216 12:51:33.625145 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.119:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59h5d6hc6h67fh95h5cdh55ch96h699h65fh54h5b9h5cbh99h56h9fh54bh557h57bh5cch584h8fh64dh665h6h79h64fhd6h65hc5h8h67bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5ld4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3e71f22a-250c-48e2-8309-7dfeb1325a2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.083622 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8gv69" Feb 16 12:51:34 crc kubenswrapper[4799]: E0216 12:51:34.210106 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 16 12:51:34 crc kubenswrapper[4799]: E0216 12:51:34.210710 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 16 12:51:34 crc kubenswrapper[4799]: E0216 12:51:34.210961 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.119:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl52n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOpti
ons:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-x2bbw_openstack(e821341e-3e99-4606-a96d-00adad2f39fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:51:34 crc kubenswrapper[4799]: E0216 12:51:34.212829 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-x2bbw" podUID="e821341e-3e99-4606-a96d-00adad2f39fb" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.253591 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8gv69"] Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.262212 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8gv69"] Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.373428 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bgzm8"] Feb 16 12:51:34 crc kubenswrapper[4799]: E0216 12:51:34.377668 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0f021f-47b3-4b51-bee2-ce0121992d9f" containerName="keystone-bootstrap" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.377705 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0f021f-47b3-4b51-bee2-ce0121992d9f" containerName="keystone-bootstrap" Feb 16 12:51:34 crc kubenswrapper[4799]: E0216 12:51:34.377741 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8251c8d3-8cd6-4118-bb20-5bffe115cd32" 
containerName="init" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.377747 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8251c8d3-8cd6-4118-bb20-5bffe115cd32" containerName="init" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.378087 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0f021f-47b3-4b51-bee2-ce0121992d9f" containerName="keystone-bootstrap" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.378106 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8251c8d3-8cd6-4118-bb20-5bffe115cd32" containerName="init" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.378973 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.383648 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qqmr2" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.384138 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.384274 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.384381 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.384517 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.387031 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bgzm8"] Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.506220 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-fernet-keys\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.506605 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-config-data\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.506685 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-scripts\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.506748 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmzlp\" (UniqueName: \"kubernetes.io/projected/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-kube-api-access-vmzlp\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.506814 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-credential-keys\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.506941 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-combined-ca-bundle\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.609207 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-combined-ca-bundle\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.609287 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-fernet-keys\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.609382 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-config-data\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.609408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-scripts\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.609435 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmzlp\" (UniqueName: \"kubernetes.io/projected/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-kube-api-access-vmzlp\") pod 
\"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.609461 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-credential-keys\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.640737 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-scripts\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.640919 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-fernet-keys\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.641016 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-credential-keys\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.641258 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-combined-ca-bundle\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc 
kubenswrapper[4799]: I0216 12:51:34.641551 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-config-data\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.650682 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmzlp\" (UniqueName: \"kubernetes.io/projected/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-kube-api-access-vmzlp\") pod \"keystone-bootstrap-bgzm8\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:34 crc kubenswrapper[4799]: I0216 12:51:34.708153 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:35 crc kubenswrapper[4799]: E0216 12:51:35.099878 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-x2bbw" podUID="e821341e-3e99-4606-a96d-00adad2f39fb" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.164114 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0f021f-47b3-4b51-bee2-ce0121992d9f" path="/var/lib/kubelet/pods/6f0f021f-47b3-4b51-bee2-ce0121992d9f/volumes" Feb 16 12:51:35 crc kubenswrapper[4799]: E0216 12:51:35.601090 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 16 12:51:35 crc kubenswrapper[4799]: E0216 12:51:35.601444 4799 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="38.102.83.119:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 16 12:51:35 crc kubenswrapper[4799]: E0216 12:51:35.601599 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.119:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dkjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagati
on:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-m5dfr_openstack(8e3d6bd7-bfe0-4951-8c70-ae25e5a07930): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 12:51:35 crc kubenswrapper[4799]: E0216 12:51:35.603395 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-m5dfr" podUID="8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.752300 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.761218 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.787687 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.789961 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.842709 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4pzm\" (UniqueName: \"kubernetes.io/projected/233e1940-0b00-4556-9a3c-c438d43a6816-kube-api-access-k4pzm\") pod \"233e1940-0b00-4556-9a3c-c438d43a6816\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843019 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-config-data\") pod \"233e1940-0b00-4556-9a3c-c438d43a6816\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843060 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-config-data\") pod \"018a7587-e44d-4974-8e51-9241904ad7df\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843095 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s86bp\" (UniqueName: \"kubernetes.io/projected/018a7587-e44d-4974-8e51-9241904ad7df-kube-api-access-s86bp\") pod \"018a7587-e44d-4974-8e51-9241904ad7df\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843181 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018a7587-e44d-4974-8e51-9241904ad7df-logs\") pod \"018a7587-e44d-4974-8e51-9241904ad7df\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843233 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-scripts\") pod \"018a7587-e44d-4974-8e51-9241904ad7df\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843381 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-scripts\") pod \"233e1940-0b00-4556-9a3c-c438d43a6816\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843454 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/233e1940-0b00-4556-9a3c-c438d43a6816-horizon-secret-key\") pod \"233e1940-0b00-4556-9a3c-c438d43a6816\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018a7587-e44d-4974-8e51-9241904ad7df-horizon-secret-key\") pod \"018a7587-e44d-4974-8e51-9241904ad7df\" (UID: \"018a7587-e44d-4974-8e51-9241904ad7df\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.843584 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/233e1940-0b00-4556-9a3c-c438d43a6816-logs\") pod \"233e1940-0b00-4556-9a3c-c438d43a6816\" (UID: \"233e1940-0b00-4556-9a3c-c438d43a6816\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.844312 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/018a7587-e44d-4974-8e51-9241904ad7df-logs" (OuterVolumeSpecName: "logs") pod "018a7587-e44d-4974-8e51-9241904ad7df" (UID: "018a7587-e44d-4974-8e51-9241904ad7df"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.844576 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-scripts" (OuterVolumeSpecName: "scripts") pod "233e1940-0b00-4556-9a3c-c438d43a6816" (UID: "233e1940-0b00-4556-9a3c-c438d43a6816"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.844974 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.845037 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018a7587-e44d-4974-8e51-9241904ad7df-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.845030 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-config-data" (OuterVolumeSpecName: "config-data") pod "018a7587-e44d-4974-8e51-9241904ad7df" (UID: "018a7587-e44d-4974-8e51-9241904ad7df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.846259 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/233e1940-0b00-4556-9a3c-c438d43a6816-logs" (OuterVolumeSpecName: "logs") pod "233e1940-0b00-4556-9a3c-c438d43a6816" (UID: "233e1940-0b00-4556-9a3c-c438d43a6816"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.846329 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-config-data" (OuterVolumeSpecName: "config-data") pod "233e1940-0b00-4556-9a3c-c438d43a6816" (UID: "233e1940-0b00-4556-9a3c-c438d43a6816"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.847955 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-scripts" (OuterVolumeSpecName: "scripts") pod "018a7587-e44d-4974-8e51-9241904ad7df" (UID: "018a7587-e44d-4974-8e51-9241904ad7df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.864889 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018a7587-e44d-4974-8e51-9241904ad7df-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "018a7587-e44d-4974-8e51-9241904ad7df" (UID: "018a7587-e44d-4974-8e51-9241904ad7df"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.865004 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/233e1940-0b00-4556-9a3c-c438d43a6816-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "233e1940-0b00-4556-9a3c-c438d43a6816" (UID: "233e1940-0b00-4556-9a3c-c438d43a6816"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.865233 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/233e1940-0b00-4556-9a3c-c438d43a6816-kube-api-access-k4pzm" (OuterVolumeSpecName: "kube-api-access-k4pzm") pod "233e1940-0b00-4556-9a3c-c438d43a6816" (UID: "233e1940-0b00-4556-9a3c-c438d43a6816"). InnerVolumeSpecName "kube-api-access-k4pzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.865744 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018a7587-e44d-4974-8e51-9241904ad7df-kube-api-access-s86bp" (OuterVolumeSpecName: "kube-api-access-s86bp") pod "018a7587-e44d-4974-8e51-9241904ad7df" (UID: "018a7587-e44d-4974-8e51-9241904ad7df"). InnerVolumeSpecName "kube-api-access-s86bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.945844 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-config-data\") pod \"ae9f8668-6aca-49c6-9386-9adab98879a7\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.945946 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-custom-prometheus-ca\") pod \"c901ce2e-6b4a-464e-8679-72329a180956\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.946156 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9f8668-6aca-49c6-9386-9adab98879a7-logs\") pod \"ae9f8668-6aca-49c6-9386-9adab98879a7\" (UID: 
\"ae9f8668-6aca-49c6-9386-9adab98879a7\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.946177 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-config-data\") pod \"c901ce2e-6b4a-464e-8679-72329a180956\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.946228 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-scripts\") pod \"ae9f8668-6aca-49c6-9386-9adab98879a7\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.946254 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae9f8668-6aca-49c6-9386-9adab98879a7-horizon-secret-key\") pod \"ae9f8668-6aca-49c6-9386-9adab98879a7\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.946343 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pg7p\" (UniqueName: \"kubernetes.io/projected/c901ce2e-6b4a-464e-8679-72329a180956-kube-api-access-2pg7p\") pod \"c901ce2e-6b4a-464e-8679-72329a180956\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.946394 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfmxw\" (UniqueName: \"kubernetes.io/projected/ae9f8668-6aca-49c6-9386-9adab98879a7-kube-api-access-bfmxw\") pod \"ae9f8668-6aca-49c6-9386-9adab98879a7\" (UID: \"ae9f8668-6aca-49c6-9386-9adab98879a7\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.946427 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c901ce2e-6b4a-464e-8679-72329a180956-logs\") pod \"c901ce2e-6b4a-464e-8679-72329a180956\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.946500 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-combined-ca-bundle\") pod \"c901ce2e-6b4a-464e-8679-72329a180956\" (UID: \"c901ce2e-6b4a-464e-8679-72329a180956\") " Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.947137 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-config-data" (OuterVolumeSpecName: "config-data") pod "ae9f8668-6aca-49c6-9386-9adab98879a7" (UID: "ae9f8668-6aca-49c6-9386-9adab98879a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.947139 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae9f8668-6aca-49c6-9386-9adab98879a7-logs" (OuterVolumeSpecName: "logs") pod "ae9f8668-6aca-49c6-9386-9adab98879a7" (UID: "ae9f8668-6aca-49c6-9386-9adab98879a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.948107 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-scripts" (OuterVolumeSpecName: "scripts") pod "ae9f8668-6aca-49c6-9386-9adab98879a7" (UID: "ae9f8668-6aca-49c6-9386-9adab98879a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.948528 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c901ce2e-6b4a-464e-8679-72329a180956-logs" (OuterVolumeSpecName: "logs") pod "c901ce2e-6b4a-464e-8679-72329a180956" (UID: "c901ce2e-6b4a-464e-8679-72329a180956"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.949545 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/233e1940-0b00-4556-9a3c-c438d43a6816-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.949892 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4pzm\" (UniqueName: \"kubernetes.io/projected/233e1940-0b00-4556-9a3c-c438d43a6816-kube-api-access-k4pzm\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.949926 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9f8668-6aca-49c6-9386-9adab98879a7-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.949941 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/233e1940-0b00-4556-9a3c-c438d43a6816-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.949956 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.949989 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s86bp\" (UniqueName: 
\"kubernetes.io/projected/018a7587-e44d-4974-8e51-9241904ad7df-kube-api-access-s86bp\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.950001 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.950011 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018a7587-e44d-4974-8e51-9241904ad7df-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.950024 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae9f8668-6aca-49c6-9386-9adab98879a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.950035 4799 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/233e1940-0b00-4556-9a3c-c438d43a6816-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.950048 4799 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018a7587-e44d-4974-8e51-9241904ad7df-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.952367 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c901ce2e-6b4a-464e-8679-72329a180956-kube-api-access-2pg7p" (OuterVolumeSpecName: "kube-api-access-2pg7p") pod "c901ce2e-6b4a-464e-8679-72329a180956" (UID: "c901ce2e-6b4a-464e-8679-72329a180956"). InnerVolumeSpecName "kube-api-access-2pg7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.952409 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae9f8668-6aca-49c6-9386-9adab98879a7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ae9f8668-6aca-49c6-9386-9adab98879a7" (UID: "ae9f8668-6aca-49c6-9386-9adab98879a7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.958290 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae9f8668-6aca-49c6-9386-9adab98879a7-kube-api-access-bfmxw" (OuterVolumeSpecName: "kube-api-access-bfmxw") pod "ae9f8668-6aca-49c6-9386-9adab98879a7" (UID: "ae9f8668-6aca-49c6-9386-9adab98879a7"). InnerVolumeSpecName "kube-api-access-bfmxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:35 crc kubenswrapper[4799]: I0216 12:51:35.996335 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c901ce2e-6b4a-464e-8679-72329a180956" (UID: "c901ce2e-6b4a-464e-8679-72329a180956"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:35.999947 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c901ce2e-6b4a-464e-8679-72329a180956" (UID: "c901ce2e-6b4a-464e-8679-72329a180956"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.017785 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-config-data" (OuterVolumeSpecName: "config-data") pod "c901ce2e-6b4a-464e-8679-72329a180956" (UID: "c901ce2e-6b4a-464e-8679-72329a180956"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.052149 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfmxw\" (UniqueName: \"kubernetes.io/projected/ae9f8668-6aca-49c6-9386-9adab98879a7-kube-api-access-bfmxw\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.052187 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c901ce2e-6b4a-464e-8679-72329a180956-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.052200 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.052229 4799 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.052240 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901ce2e-6b4a-464e-8679-72329a180956-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.052250 4799 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/ae9f8668-6aca-49c6-9386-9adab98879a7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.052263 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pg7p\" (UniqueName: \"kubernetes.io/projected/c901ce2e-6b4a-464e-8679-72329a180956-kube-api-access-2pg7p\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.110422 4799 generic.go:334] "Generic (PLEG): container finished" podID="407468d3-5baf-4bde-af39-679ed83889c8" containerID="fa6c1b8da983e0dee2d661b347a35553f1dac406a6246a771e9f1cd59eb8dbea" exitCode=0 Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.110493 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j8vxl" event={"ID":"407468d3-5baf-4bde-af39-679ed83889c8","Type":"ContainerDied","Data":"fa6c1b8da983e0dee2d661b347a35553f1dac406a6246a771e9f1cd59eb8dbea"} Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.113763 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.113779 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c901ce2e-6b4a-464e-8679-72329a180956","Type":"ContainerDied","Data":"8205ea8a0b956a3180c11cd0542c3918d8648fbe96ffe4c885365f7d5e74046a"} Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.113842 4799 scope.go:117] "RemoveContainer" containerID="e05b93d4834312107d2628730b65f64477261c59e5c6d751595dfb648dc1c80a" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.117504 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d49bd78c-pb22s" event={"ID":"018a7587-e44d-4974-8e51-9241904ad7df","Type":"ContainerDied","Data":"6a098cbff2549ede61b151f9b05efd2cfd4fa2c0764107ad61097799f52efc02"} Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.117592 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65d49bd78c-pb22s" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.120880 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-775f9c4c9f-rxgkm" event={"ID":"ae9f8668-6aca-49c6-9386-9adab98879a7","Type":"ContainerDied","Data":"d55b276bd38c979ab6457fd9854e2a7b49e0b7aca08625a5d9775eda8cf4b2a0"} Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.120995 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-775f9c4c9f-rxgkm" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.122604 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d9fccfddf-b9jg7" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.122655 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d9fccfddf-b9jg7" event={"ID":"233e1940-0b00-4556-9a3c-c438d43a6816","Type":"ContainerDied","Data":"8bafcdf3f2c28cb5ed0fb030a63e36e5f01d66db2dd8d5a41d252334be976b2f"} Feb 16 12:51:36 crc kubenswrapper[4799]: E0216 12:51:36.124699 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.119:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-m5dfr" podUID="8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.185292 4799 scope.go:117] "RemoveContainer" containerID="a2297f9d7899b991375a02e0894dc71d584445c6f8e3bb2834d711eaa331450e" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.239230 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d9fccfddf-b9jg7"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.252392 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d9fccfddf-b9jg7"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.274489 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b64799464-xwrv9"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.278807 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.299888 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.314914 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:36 crc kubenswrapper[4799]: E0216 12:51:36.315895 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api-log" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.315907 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api-log" Feb 16 12:51:36 crc kubenswrapper[4799]: E0216 12:51:36.315929 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.315935 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.316089 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.316113 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api-log" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.317145 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.319702 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.335176 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6746fc7768-pc68r"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.366861 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:36 crc kubenswrapper[4799]: W0216 12:51:36.374715 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5357e09b_7a51_4687_be1c_99a473120c90.slice/crio-0c48f7120a195adfe4901873eed7a5533e82b99dbd648265975772d29a8c49a9 WatchSource:0}: Error finding container 0c48f7120a195adfe4901873eed7a5533e82b99dbd648265975772d29a8c49a9: Status 404 returned error can't find the container with id 0c48f7120a195adfe4901873eed7a5533e82b99dbd648265975772d29a8c49a9 Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.388019 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-775f9c4c9f-rxgkm"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.401056 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-775f9c4c9f-rxgkm"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.419879 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65d49bd78c-pb22s"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.429512 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65d49bd78c-pb22s"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.456216 4799 scope.go:117] "RemoveContainer" containerID="93f08f37416c71d9a0847878be8a94fdfe19841fe3d2f77c684e8fb95010752e" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.460340 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.460398 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5153cc1-228a-4731-adc9-dbdde3ae1661-logs\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.460423 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-config-data\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.460469 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92w6\" (UniqueName: \"kubernetes.io/projected/b5153cc1-228a-4731-adc9-dbdde3ae1661-kube-api-access-h92w6\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.461012 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.494600 4799 scope.go:117] "RemoveContainer" containerID="aa295904e5bbc2812cc320eca894cbd60c3092c40678bc66fde2fda3d527b5bc" Feb 16 12:51:36 crc 
kubenswrapper[4799]: I0216 12:51:36.514109 4799 scope.go:117] "RemoveContainer" containerID="eb02e4defab5ed04e1e0c24a6e8f26d351acccd981ccbb6a298d6f0333373787" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.562682 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.562765 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.562799 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5153cc1-228a-4731-adc9-dbdde3ae1661-logs\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.562837 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-config-data\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.562898 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92w6\" (UniqueName: \"kubernetes.io/projected/b5153cc1-228a-4731-adc9-dbdde3ae1661-kube-api-access-h92w6\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 
12:51:36.563844 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5153cc1-228a-4731-adc9-dbdde3ae1661-logs\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.567652 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.567998 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.581965 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-config-data\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.598947 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92w6\" (UniqueName: \"kubernetes.io/projected/b5153cc1-228a-4731-adc9-dbdde3ae1661-kube-api-access-h92w6\") pod \"watcher-api-0\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.654689 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.941009 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bgzm8"] Feb 16 12:51:36 crc kubenswrapper[4799]: I0216 12:51:36.983389 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.152544 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerName="glance-log" containerID="cri-o://f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030" gracePeriod=30 Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.152866 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerName="glance-httpd" containerID="cri-o://cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de" gracePeriod=30 Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.157037 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" containerName="glance-log" containerID="cri-o://b3f1df723568629195e41d1a339702a8b6160ef2a42cf4d79e628be311effae8" gracePeriod=30 Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.157173 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" containerName="glance-httpd" containerID="cri-o://e62a81af77cd423f4e6237643412e80763fa7bd06c07cfa789a98409e51917fe" gracePeriod=30 Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.176602 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" 
podStartSLOduration=38.176584057 podStartE2EDuration="38.176584057s" podCreationTimestamp="2026-02-16 12:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:37.175041742 +0000 UTC m=+1202.768057076" watchObservedRunningTime="2026-02-16 12:51:37.176584057 +0000 UTC m=+1202.769599391" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.208426 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018a7587-e44d-4974-8e51-9241904ad7df" path="/var/lib/kubelet/pods/018a7587-e44d-4974-8e51-9241904ad7df/volumes" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.209600 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="233e1940-0b00-4556-9a3c-c438d43a6816" path="/var/lib/kubelet/pods/233e1940-0b00-4556-9a3c-c438d43a6816/volumes" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.210238 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae9f8668-6aca-49c6-9386-9adab98879a7" path="/var/lib/kubelet/pods/ae9f8668-6aca-49c6-9386-9adab98879a7/volumes" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.210738 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c901ce2e-6b4a-464e-8679-72329a180956" path="/var/lib/kubelet/pods/c901ce2e-6b4a-464e-8679-72329a180956/volumes" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.213302 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f1a3af6-c025-4113-8967-3a8d48724ef9","Type":"ContainerStarted","Data":"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.214372 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.214407 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-external-api-0" event={"ID":"300e5319-9412-411d-8c94-5fbe2b001d54","Type":"ContainerStarted","Data":"e62a81af77cd423f4e6237643412e80763fa7bd06c07cfa789a98409e51917fe"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.214450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64799464-xwrv9" event={"ID":"aa66dcb2-43c2-4824-80f8-30911a4a8c72","Type":"ContainerStarted","Data":"9eb3394740ac7589fd7c5b3501d8cd5afd22f1c09c0fe8b818f0983918ac2a7a"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.214468 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9ef5643d-2fd2-478a-98bd-ed6217fa9b32","Type":"ContainerStarted","Data":"202bd145655cd8f66c450813168ac3e8f765db9d3da3e72ce71b15bb77a822b2"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.214484 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" event={"ID":"51e33f8d-7dc3-4d9a-a6db-c005cae6f522","Type":"ContainerStarted","Data":"3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.214529 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rczq6" event={"ID":"03cbd43b-bc5a-4954-aa6f-1cb9440076a9","Type":"ContainerStarted","Data":"b52f75425facafb9dc4b8fa9b64e8b925694f38305b240d8d0425e375afb915e"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.214548 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6746fc7768-pc68r" event={"ID":"5357e09b-7a51-4687-be1c-99a473120c90","Type":"ContainerStarted","Data":"0c48f7120a195adfe4901873eed7a5533e82b99dbd648265975772d29a8c49a9"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.219722 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=38.219703386 
podStartE2EDuration="38.219703386s" podCreationTimestamp="2026-02-16 12:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:37.198034913 +0000 UTC m=+1202.791050247" watchObservedRunningTime="2026-02-16 12:51:37.219703386 +0000 UTC m=+1202.812718720" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.219954 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"92cefdaf-4a4b-4771-9b15-0666298881e8","Type":"ContainerStarted","Data":"6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.226365 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bgzm8" event={"ID":"2ea741e8-2ce9-47a5-a56f-c4ede0af0124","Type":"ContainerStarted","Data":"3c767225ae4adee9d73ebfc933753e712ecc84983b9677d6ba07004bbf8264db"} Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.236745 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rczq6" podStartSLOduration=4.032463259 podStartE2EDuration="38.236719415s" podCreationTimestamp="2026-02-16 12:50:59 +0000 UTC" firstStartedPulling="2026-02-16 12:51:01.367818302 +0000 UTC m=+1166.960833636" lastFinishedPulling="2026-02-16 12:51:35.572074438 +0000 UTC m=+1201.165089792" observedRunningTime="2026-02-16 12:51:37.224658398 +0000 UTC m=+1202.817673732" watchObservedRunningTime="2026-02-16 12:51:37.236719415 +0000 UTC m=+1202.829734749" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.260101 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" podStartSLOduration=38.260084137 podStartE2EDuration="38.260084137s" podCreationTimestamp="2026-02-16 12:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-16 12:51:37.245510438 +0000 UTC m=+1202.838525782" watchObservedRunningTime="2026-02-16 12:51:37.260084137 +0000 UTC m=+1202.853099471" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.290479 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.295676 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=4.058082355 podStartE2EDuration="39.295659899s" podCreationTimestamp="2026-02-16 12:50:58 +0000 UTC" firstStartedPulling="2026-02-16 12:51:01.148024168 +0000 UTC m=+1166.741039492" lastFinishedPulling="2026-02-16 12:51:36.385601702 +0000 UTC m=+1201.978617036" observedRunningTime="2026-02-16 12:51:37.262063614 +0000 UTC m=+1202.855078948" watchObservedRunningTime="2026-02-16 12:51:37.295659899 +0000 UTC m=+1202.888675233" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.303572 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.158098248 podStartE2EDuration="39.303560576s" podCreationTimestamp="2026-02-16 12:50:58 +0000 UTC" firstStartedPulling="2026-02-16 12:51:00.429303247 +0000 UTC m=+1166.022318581" lastFinishedPulling="2026-02-16 12:51:35.574765575 +0000 UTC m=+1201.167780909" observedRunningTime="2026-02-16 12:51:37.28036103 +0000 UTC m=+1202.873376364" watchObservedRunningTime="2026-02-16 12:51:37.303560576 +0000 UTC m=+1202.896575910" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.509740 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.589047 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsjgv\" (UniqueName: \"kubernetes.io/projected/407468d3-5baf-4bde-af39-679ed83889c8-kube-api-access-wsjgv\") pod \"407468d3-5baf-4bde-af39-679ed83889c8\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.589340 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-config\") pod \"407468d3-5baf-4bde-af39-679ed83889c8\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.589547 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-combined-ca-bundle\") pod \"407468d3-5baf-4bde-af39-679ed83889c8\" (UID: \"407468d3-5baf-4bde-af39-679ed83889c8\") " Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.594621 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407468d3-5baf-4bde-af39-679ed83889c8-kube-api-access-wsjgv" (OuterVolumeSpecName: "kube-api-access-wsjgv") pod "407468d3-5baf-4bde-af39-679ed83889c8" (UID: "407468d3-5baf-4bde-af39-679ed83889c8"). InnerVolumeSpecName "kube-api-access-wsjgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.686890 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "407468d3-5baf-4bde-af39-679ed83889c8" (UID: "407468d3-5baf-4bde-af39-679ed83889c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.699692 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsjgv\" (UniqueName: \"kubernetes.io/projected/407468d3-5baf-4bde-af39-679ed83889c8-kube-api-access-wsjgv\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.699742 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.760325 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-config" (OuterVolumeSpecName: "config") pod "407468d3-5baf-4bde-af39-679ed83889c8" (UID: "407468d3-5baf-4bde-af39-679ed83889c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.801449 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/407468d3-5baf-4bde-af39-679ed83889c8-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:37 crc kubenswrapper[4799]: I0216 12:51:37.882486 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.004586 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-logs\") pod \"3f1a3af6-c025-4113-8967-3a8d48724ef9\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.004787 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-config-data\") pod \"3f1a3af6-c025-4113-8967-3a8d48724ef9\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.004843 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-internal-tls-certs\") pod \"3f1a3af6-c025-4113-8967-3a8d48724ef9\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.005039 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"3f1a3af6-c025-4113-8967-3a8d48724ef9\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.005079 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-httpd-run\") pod \"3f1a3af6-c025-4113-8967-3a8d48724ef9\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.005139 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-combined-ca-bundle\") pod \"3f1a3af6-c025-4113-8967-3a8d48724ef9\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.005204 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9hp8\" (UniqueName: \"kubernetes.io/projected/3f1a3af6-c025-4113-8967-3a8d48724ef9-kube-api-access-b9hp8\") pod \"3f1a3af6-c025-4113-8967-3a8d48724ef9\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.005260 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-scripts\") pod \"3f1a3af6-c025-4113-8967-3a8d48724ef9\" (UID: \"3f1a3af6-c025-4113-8967-3a8d48724ef9\") " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.006384 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-logs" (OuterVolumeSpecName: "logs") pod "3f1a3af6-c025-4113-8967-3a8d48724ef9" (UID: "3f1a3af6-c025-4113-8967-3a8d48724ef9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.006600 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3f1a3af6-c025-4113-8967-3a8d48724ef9" (UID: "3f1a3af6-c025-4113-8967-3a8d48724ef9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.017856 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-scripts" (OuterVolumeSpecName: "scripts") pod "3f1a3af6-c025-4113-8967-3a8d48724ef9" (UID: "3f1a3af6-c025-4113-8967-3a8d48724ef9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.017881 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1a3af6-c025-4113-8967-3a8d48724ef9-kube-api-access-b9hp8" (OuterVolumeSpecName: "kube-api-access-b9hp8") pod "3f1a3af6-c025-4113-8967-3a8d48724ef9" (UID: "3f1a3af6-c025-4113-8967-3a8d48724ef9"). InnerVolumeSpecName "kube-api-access-b9hp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.017933 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "3f1a3af6-c025-4113-8967-3a8d48724ef9" (UID: "3f1a3af6-c025-4113-8967-3a8d48724ef9"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.080267 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f1a3af6-c025-4113-8967-3a8d48724ef9" (UID: "3f1a3af6-c025-4113-8967-3a8d48724ef9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.089233 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-config-data" (OuterVolumeSpecName: "config-data") pod "3f1a3af6-c025-4113-8967-3a8d48724ef9" (UID: "3f1a3af6-c025-4113-8967-3a8d48724ef9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.110628 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.110660 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.110672 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.110681 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9hp8\" (UniqueName: \"kubernetes.io/projected/3f1a3af6-c025-4113-8967-3a8d48724ef9-kube-api-access-b9hp8\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.110690 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.110700 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3f1a3af6-c025-4113-8967-3a8d48724ef9-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.110709 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.135638 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.150285 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3f1a3af6-c025-4113-8967-3a8d48724ef9" (UID: "3f1a3af6-c025-4113-8967-3a8d48724ef9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.212402 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f1a3af6-c025-4113-8967-3a8d48724ef9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.212439 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.254918 4799 generic.go:334] "Generic (PLEG): container finished" podID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerID="cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de" exitCode=143 Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.254948 4799 generic.go:334] "Generic (PLEG): container finished" podID="3f1a3af6-c025-4113-8967-3a8d48724ef9" 
containerID="f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030" exitCode=143 Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.255051 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.255879 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f1a3af6-c025-4113-8967-3a8d48724ef9","Type":"ContainerDied","Data":"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.255949 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f1a3af6-c025-4113-8967-3a8d48724ef9","Type":"ContainerDied","Data":"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.255965 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f1a3af6-c025-4113-8967-3a8d48724ef9","Type":"ContainerDied","Data":"4eaa99915f8f6742666b9c6836082aa62dacd2766742fd4d833700380feb7a46"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.255986 4799 scope.go:117] "RemoveContainer" containerID="cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.270874 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b5153cc1-228a-4731-adc9-dbdde3ae1661","Type":"ContainerStarted","Data":"56fc74aa295ab855de6e18f8ece981c0446d6ea4735b98e283676eade66da9b4"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.272243 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"b5153cc1-228a-4731-adc9-dbdde3ae1661","Type":"ContainerStarted","Data":"0bb9fc7cc62ce198e859a53c73fd3cd63a255007e5b8dd27820c6ba9249bbfed"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.272268 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b5153cc1-228a-4731-adc9-dbdde3ae1661","Type":"ContainerStarted","Data":"a62844b131fc29879a8eef291394f7f02f24e121a3f85fefe61182edc5644bae"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.274203 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.276334 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.174:9322/\": dial tcp 10.217.0.174:9322: connect: connection refused" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.287463 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bgzm8" event={"ID":"2ea741e8-2ce9-47a5-a56f-c4ede0af0124","Type":"ContainerStarted","Data":"2cbc5e9ccb2b67c6a42b08a6f389487791bf15e7cebafcfbed12fa66596e62d7"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.304186 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.304163618 podStartE2EDuration="2.304163618s" podCreationTimestamp="2026-02-16 12:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:38.303594961 +0000 UTC m=+1203.896610305" watchObservedRunningTime="2026-02-16 12:51:38.304163618 +0000 UTC m=+1203.897178952" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.310251 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6746fc7768-pc68r" 
event={"ID":"5357e09b-7a51-4687-be1c-99a473120c90","Type":"ContainerStarted","Data":"9e98b1e0776c21e798c8ec0399674b680e3006908cb7040c91104594062fa43f"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.310294 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6746fc7768-pc68r" event={"ID":"5357e09b-7a51-4687-be1c-99a473120c90","Type":"ContainerStarted","Data":"6a7d9541f9ee6c4936a4ca92c8e7cbe7f3befe853e369e78b7a6a37ba1b1f36a"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.330446 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j8vxl" event={"ID":"407468d3-5baf-4bde-af39-679ed83889c8","Type":"ContainerDied","Data":"76cd0f088b32ddb9c57a86f36b69c59c78befaf626e2c64f49e1356433499546"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.330489 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76cd0f088b32ddb9c57a86f36b69c59c78befaf626e2c64f49e1356433499546" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.330526 4799 scope.go:117] "RemoveContainer" containerID="f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.330661 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-j8vxl" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.348256 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bgzm8" podStartSLOduration=4.348238755 podStartE2EDuration="4.348238755s" podCreationTimestamp="2026-02-16 12:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:38.327591101 +0000 UTC m=+1203.920606465" watchObservedRunningTime="2026-02-16 12:51:38.348238755 +0000 UTC m=+1203.941254089" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.355746 4799 generic.go:334] "Generic (PLEG): container finished" podID="300e5319-9412-411d-8c94-5fbe2b001d54" containerID="e62a81af77cd423f4e6237643412e80763fa7bd06c07cfa789a98409e51917fe" exitCode=143 Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.355775 4799 generic.go:334] "Generic (PLEG): container finished" podID="300e5319-9412-411d-8c94-5fbe2b001d54" containerID="b3f1df723568629195e41d1a339702a8b6160ef2a42cf4d79e628be311effae8" exitCode=143 Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.355844 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"300e5319-9412-411d-8c94-5fbe2b001d54","Type":"ContainerDied","Data":"e62a81af77cd423f4e6237643412e80763fa7bd06c07cfa789a98409e51917fe"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.355872 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"300e5319-9412-411d-8c94-5fbe2b001d54","Type":"ContainerDied","Data":"b3f1df723568629195e41d1a339702a8b6160ef2a42cf4d79e628be311effae8"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.382204 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.403413 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64799464-xwrv9" event={"ID":"aa66dcb2-43c2-4824-80f8-30911a4a8c72","Type":"ContainerStarted","Data":"f5e7f0cad6304deb9ce0fa9ecfe24f7a00e97c436a8184aa58fdb9680802085c"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.403472 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64799464-xwrv9" event={"ID":"aa66dcb2-43c2-4824-80f8-30911a4a8c72","Type":"ContainerStarted","Data":"5860ae1d98681f778b286a5145aadbb547e51aa107d4f380ed929ebce3939504"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.407375 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.423525 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e71f22a-250c-48e2-8309-7dfeb1325a2b","Type":"ContainerStarted","Data":"cd5deb5fd3db077a1a851740fa75368f52abf00dac239f25d0939245a9dec90c"} Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.435660 4799 scope.go:117] "RemoveContainer" containerID="cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.435855 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:51:38 crc kubenswrapper[4799]: E0216 12:51:38.436301 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerName="glance-httpd" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.436316 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerName="glance-httpd" Feb 16 12:51:38 crc kubenswrapper[4799]: E0216 12:51:38.436388 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407468d3-5baf-4bde-af39-679ed83889c8" containerName="neutron-db-sync" Feb 16 12:51:38 crc 
kubenswrapper[4799]: I0216 12:51:38.436398 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="407468d3-5baf-4bde-af39-679ed83889c8" containerName="neutron-db-sync" Feb 16 12:51:38 crc kubenswrapper[4799]: E0216 12:51:38.436415 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerName="glance-log" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.436423 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerName="glance-log" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.436641 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerName="glance-log" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.436656 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="407468d3-5baf-4bde-af39-679ed83889c8" containerName="neutron-db-sync" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.436689 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" containerName="glance-httpd" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.437846 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: E0216 12:51:38.438183 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de\": container with ID starting with cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de not found: ID does not exist" containerID="cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.438225 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de"} err="failed to get container status \"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de\": rpc error: code = NotFound desc = could not find container \"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de\": container with ID starting with cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de not found: ID does not exist" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.438252 4799 scope.go:117] "RemoveContainer" containerID="f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030" Feb 16 12:51:38 crc kubenswrapper[4799]: E0216 12:51:38.438572 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030\": container with ID starting with f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030 not found: ID does not exist" containerID="f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.438594 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030"} err="failed to get container status \"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030\": rpc error: code = NotFound desc = could not find container \"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030\": container with ID starting with f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030 not found: ID does not exist" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.438606 4799 scope.go:117] "RemoveContainer" containerID="cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.446517 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de"} err="failed to get container status \"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de\": rpc error: code = NotFound desc = could not find container \"cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de\": container with ID starting with cdadae17300bf9e1e600dc7c6f8b1750991256408f46eaa7544559b3fc7993de not found: ID does not exist" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.446584 4799 scope.go:117] "RemoveContainer" containerID="f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.446856 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.446945 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.447493 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6746fc7768-pc68r" podStartSLOduration=31.036352101 
podStartE2EDuration="31.447477887s" podCreationTimestamp="2026-02-16 12:51:07 +0000 UTC" firstStartedPulling="2026-02-16 12:51:36.381491864 +0000 UTC m=+1201.974507198" lastFinishedPulling="2026-02-16 12:51:36.79261765 +0000 UTC m=+1202.385632984" observedRunningTime="2026-02-16 12:51:38.403555525 +0000 UTC m=+1203.996570869" watchObservedRunningTime="2026-02-16 12:51:38.447477887 +0000 UTC m=+1204.040493221" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.458032 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030"} err="failed to get container status \"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030\": rpc error: code = NotFound desc = could not find container \"f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030\": container with ID starting with f7db7e3948c2253210e5027f1a016f1383688a8549e2903e3fd4f2cdcbeb9030 not found: ID does not exist" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.487320 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.503589 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b64799464-xwrv9" podStartSLOduration=31.121188929 podStartE2EDuration="31.503568929s" podCreationTimestamp="2026-02-16 12:51:07 +0000 UTC" firstStartedPulling="2026-02-16 12:51:36.340530186 +0000 UTC m=+1201.933545520" lastFinishedPulling="2026-02-16 12:51:36.722910186 +0000 UTC m=+1202.315925520" observedRunningTime="2026-02-16 12:51:38.451156123 +0000 UTC m=+1204.044171457" watchObservedRunningTime="2026-02-16 12:51:38.503568929 +0000 UTC m=+1204.096584263" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.538896 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.538973 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.539107 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.552988 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.555402 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.555788 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h6cw\" 
(UniqueName: \"kubernetes.io/projected/4e8b5246-e2d5-4349-aa8c-d58091276c4b-kube-api-access-9h6cw\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.555986 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.556091 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.618406 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c88d8b85b-zrggw"] Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.648209 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.659698 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.660017 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mdlfb" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.660188 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.660363 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.661377 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668f78969f-gvgfh"] Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.677678 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.677733 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.677767 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.677783 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.677808 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.677887 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h6cw\" (UniqueName: \"kubernetes.io/projected/4e8b5246-e2d5-4349-aa8c-d58091276c4b-kube-api-access-9h6cw\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.677930 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.677958 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc 
kubenswrapper[4799]: I0216 12:51:38.689967 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.698174 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.698497 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.720882 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c88d8b85b-zrggw"] Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.727040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h6cw\" (UniqueName: \"kubernetes.io/projected/4e8b5246-e2d5-4349-aa8c-d58091276c4b-kube-api-access-9h6cw\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.734346 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.743026 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.743660 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.743704 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68c799447-vnxkx"] Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.745712 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.745906 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.777030 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.782684 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdq8\" (UniqueName: \"kubernetes.io/projected/02e06f59-2164-4486-9138-2819bf6dcf26-kube-api-access-gtdq8\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.782760 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-ovndb-tls-certs\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.782790 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-config\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " 
pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.782845 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-httpd-config\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.782977 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-combined-ca-bundle\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.793473 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68c799447-vnxkx"] Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.807143 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887151 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-swift-storage-0\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887203 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-combined-ca-bundle\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887227 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-sb\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887243 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-nb\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887278 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-config\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: 
\"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887298 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtdq8\" (UniqueName: \"kubernetes.io/projected/02e06f59-2164-4486-9138-2819bf6dcf26-kube-api-access-gtdq8\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887313 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-svc\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887341 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-ovndb-tls-certs\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887362 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-config\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887404 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-httpd-config\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 
12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.887428 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2lr\" (UniqueName: \"kubernetes.io/projected/f2be7512-5841-4f22-bb5a-92c1f2beeceb-kube-api-access-nk2lr\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.899413 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-ovndb-tls-certs\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.914816 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtdq8\" (UniqueName: \"kubernetes.io/projected/02e06f59-2164-4486-9138-2819bf6dcf26-kube-api-access-gtdq8\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.916897 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-combined-ca-bundle\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.922374 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-config\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.924777 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-httpd-config\") pod \"neutron-6c88d8b85b-zrggw\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.986106 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.990366 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk2lr\" (UniqueName: \"kubernetes.io/projected/f2be7512-5841-4f22-bb5a-92c1f2beeceb-kube-api-access-nk2lr\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.990485 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-swift-storage-0\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.990510 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-sb\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.990526 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-nb\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " 
pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.990560 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-config\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.990582 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-svc\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.991614 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-sb\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.991710 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-svc\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.995607 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-swift-storage-0\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.995780 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-nb\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:38 crc kubenswrapper[4799]: I0216 12:51:38.995835 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-config\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.013423 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.026737 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk2lr\" (UniqueName: \"kubernetes.io/projected/f2be7512-5841-4f22-bb5a-92c1f2beeceb-kube-api-access-nk2lr\") pod \"dnsmasq-dns-68c799447-vnxkx\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.093795 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znr5b\" (UniqueName: \"kubernetes.io/projected/300e5319-9412-411d-8c94-5fbe2b001d54-kube-api-access-znr5b\") pod \"300e5319-9412-411d-8c94-5fbe2b001d54\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.093937 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-httpd-run\") pod \"300e5319-9412-411d-8c94-5fbe2b001d54\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " Feb 16 12:51:39 crc 
kubenswrapper[4799]: I0216 12:51:39.093996 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-scripts\") pod \"300e5319-9412-411d-8c94-5fbe2b001d54\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.094081 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-public-tls-certs\") pod \"300e5319-9412-411d-8c94-5fbe2b001d54\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.094108 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-logs\") pod \"300e5319-9412-411d-8c94-5fbe2b001d54\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.094141 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-config-data\") pod \"300e5319-9412-411d-8c94-5fbe2b001d54\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.094215 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"300e5319-9412-411d-8c94-5fbe2b001d54\" (UID: \"300e5319-9412-411d-8c94-5fbe2b001d54\") " Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.094256 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-combined-ca-bundle\") pod \"300e5319-9412-411d-8c94-5fbe2b001d54\" (UID: 
\"300e5319-9412-411d-8c94-5fbe2b001d54\") " Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.102639 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "300e5319-9412-411d-8c94-5fbe2b001d54" (UID: "300e5319-9412-411d-8c94-5fbe2b001d54"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.110395 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-logs" (OuterVolumeSpecName: "logs") pod "300e5319-9412-411d-8c94-5fbe2b001d54" (UID: "300e5319-9412-411d-8c94-5fbe2b001d54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.130278 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300e5319-9412-411d-8c94-5fbe2b001d54-kube-api-access-znr5b" (OuterVolumeSpecName: "kube-api-access-znr5b") pod "300e5319-9412-411d-8c94-5fbe2b001d54" (UID: "300e5319-9412-411d-8c94-5fbe2b001d54"). InnerVolumeSpecName "kube-api-access-znr5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.132471 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "300e5319-9412-411d-8c94-5fbe2b001d54" (UID: "300e5319-9412-411d-8c94-5fbe2b001d54"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.134472 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-scripts" (OuterVolumeSpecName: "scripts") pod "300e5319-9412-411d-8c94-5fbe2b001d54" (UID: "300e5319-9412-411d-8c94-5fbe2b001d54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.161473 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "300e5319-9412-411d-8c94-5fbe2b001d54" (UID: "300e5319-9412-411d-8c94-5fbe2b001d54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.173372 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1a3af6-c025-4113-8967-3a8d48724ef9" path="/var/lib/kubelet/pods/3f1a3af6-c025-4113-8967-3a8d48724ef9/volumes" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.205941 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.205971 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znr5b\" (UniqueName: \"kubernetes.io/projected/300e5319-9412-411d-8c94-5fbe2b001d54-kube-api-access-znr5b\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.205984 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 
12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.205994 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.206002 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/300e5319-9412-411d-8c94-5fbe2b001d54-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.206022 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.243711 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.263785 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-config-data" (OuterVolumeSpecName: "config-data") pod "300e5319-9412-411d-8c94-5fbe2b001d54" (UID: "300e5319-9412-411d-8c94-5fbe2b001d54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.264816 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.264884 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.294567 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "300e5319-9412-411d-8c94-5fbe2b001d54" (UID: "300e5319-9412-411d-8c94-5fbe2b001d54"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.309793 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.310844 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.310891 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300e5319-9412-411d-8c94-5fbe2b001d54-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.310904 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.314580 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.342148 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.392550 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.394666 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c901ce2e-6b4a-464e-8679-72329a180956" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.159:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.472738 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"300e5319-9412-411d-8c94-5fbe2b001d54","Type":"ContainerDied","Data":"a1215a8b11d1a4797d34073985526188a04c4107740147ecf59ffe29b394f09f"} Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.473652 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.473722 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" podUID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" containerName="dnsmasq-dns" containerID="cri-o://3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0" gracePeriod=10 Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.477100 4799 scope.go:117] "RemoveContainer" containerID="e62a81af77cd423f4e6237643412e80763fa7bd06c07cfa789a98409e51917fe" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.491968 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.538113 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.573350 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.577269 4799 scope.go:117] "RemoveContainer" containerID="b3f1df723568629195e41d1a339702a8b6160ef2a42cf4d79e628be311effae8" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.577648 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.678683 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.700507 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:51:39 crc kubenswrapper[4799]: E0216 12:51:39.700908 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" 
containerName="glance-httpd" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.700920 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" containerName="glance-httpd" Feb 16 12:51:39 crc kubenswrapper[4799]: E0216 12:51:39.700971 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" containerName="glance-log" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.700978 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" containerName="glance-log" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.701169 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" containerName="glance-httpd" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.701195 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" containerName="glance-log" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.702322 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.712713 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.712984 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.714623 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.726659 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.746828 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.839519 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.839595 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.839688 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.839746 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.839824 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.839889 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ddz\" (UniqueName: \"kubernetes.io/projected/94da3e05-8956-4ab9-b272-46b6afcf14d3-kube-api-access-89ddz\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.839910 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-logs\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.839978 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.943624 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.943693 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.943760 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.943792 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.943854 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.943890 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89ddz\" (UniqueName: \"kubernetes.io/projected/94da3e05-8956-4ab9-b272-46b6afcf14d3-kube-api-access-89ddz\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.943920 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-logs\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.943969 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.944831 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.947511 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.948901 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-logs\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.962196 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.964672 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.967138 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc kubenswrapper[4799]: I0216 12:51:39.973512 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:39 crc 
kubenswrapper[4799]: I0216 12:51:39.986843 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.040565 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89ddz\" (UniqueName: \"kubernetes.io/projected/94da3e05-8956-4ab9-b272-46b6afcf14d3-kube-api-access-89ddz\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:40 crc kubenswrapper[4799]: W0216 12:51:40.063745 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e8b5246_e2d5_4349_aa8c_d58091276c4b.slice/crio-4b959b9c56b48bc24701eae29365abdda20bc300fb61e2ed4f2b614830e3896e WatchSource:0}: Error finding container 4b959b9c56b48bc24701eae29365abdda20bc300fb61e2ed4f2b614830e3896e: Status 404 returned error can't find the container with id 4b959b9c56b48bc24701eae29365abdda20bc300fb61e2ed4f2b614830e3896e Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.136773 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " pod="openstack/glance-default-external-api-0" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.196540 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c88d8b85b-zrggw"] Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.339096 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68c799447-vnxkx"] Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.356941 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.460094 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.572277 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbkbn\" (UniqueName: \"kubernetes.io/projected/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-kube-api-access-pbkbn\") pod \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.572562 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-sb\") pod \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.572597 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-svc\") pod \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.572658 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-nb\") pod \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.572683 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-config\") pod \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\" (UID: 
\"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.572721 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-swift-storage-0\") pod \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\" (UID: \"51e33f8d-7dc3-4d9a-a6db-c005cae6f522\") " Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.593460 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-kube-api-access-pbkbn" (OuterVolumeSpecName: "kube-api-access-pbkbn") pod "51e33f8d-7dc3-4d9a-a6db-c005cae6f522" (UID: "51e33f8d-7dc3-4d9a-a6db-c005cae6f522"). InnerVolumeSpecName "kube-api-access-pbkbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.646492 4799 generic.go:334] "Generic (PLEG): container finished" podID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" containerID="3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0" exitCode=0 Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.646588 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" event={"ID":"51e33f8d-7dc3-4d9a-a6db-c005cae6f522","Type":"ContainerDied","Data":"3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0"} Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.646617 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" event={"ID":"51e33f8d-7dc3-4d9a-a6db-c005cae6f522","Type":"ContainerDied","Data":"536ebb56e1a77c06b44b167427522abc1642ea92f0fc79600028e6126b4e011f"} Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.646636 4799 scope.go:117] "RemoveContainer" containerID="3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.646770 4799 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-668f78969f-gvgfh" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.663730 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e8b5246-e2d5-4349-aa8c-d58091276c4b","Type":"ContainerStarted","Data":"4b959b9c56b48bc24701eae29365abdda20bc300fb61e2ed4f2b614830e3896e"} Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.672675 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51e33f8d-7dc3-4d9a-a6db-c005cae6f522" (UID: "51e33f8d-7dc3-4d9a-a6db-c005cae6f522"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.675705 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.675731 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbkbn\" (UniqueName: \"kubernetes.io/projected/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-kube-api-access-pbkbn\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.675879 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c799447-vnxkx" event={"ID":"f2be7512-5841-4f22-bb5a-92c1f2beeceb","Type":"ContainerStarted","Data":"16ef46d141321b97d7814fee56b63f122372e4065a6a56229fd54339b4d49961"} Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.709409 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c88d8b85b-zrggw" 
event={"ID":"02e06f59-2164-4486-9138-2819bf6dcf26","Type":"ContainerStarted","Data":"c831e48bb25eac7be26eecad13c174de30305fa594fceb5a362b8bc9056fbb52"} Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.723882 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "51e33f8d-7dc3-4d9a-a6db-c005cae6f522" (UID: "51e33f8d-7dc3-4d9a-a6db-c005cae6f522"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.744652 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51e33f8d-7dc3-4d9a-a6db-c005cae6f522" (UID: "51e33f8d-7dc3-4d9a-a6db-c005cae6f522"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.756932 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-config" (OuterVolumeSpecName: "config") pod "51e33f8d-7dc3-4d9a-a6db-c005cae6f522" (UID: "51e33f8d-7dc3-4d9a-a6db-c005cae6f522"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.770597 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51e33f8d-7dc3-4d9a-a6db-c005cae6f522" (UID: "51e33f8d-7dc3-4d9a-a6db-c005cae6f522"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.777992 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.778031 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.778040 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.778049 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e33f8d-7dc3-4d9a-a6db-c005cae6f522-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.902288 4799 scope.go:117] "RemoveContainer" containerID="7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.974079 4799 scope.go:117] "RemoveContainer" containerID="3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0" Feb 16 12:51:40 crc kubenswrapper[4799]: E0216 12:51:40.974859 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0\": container with ID starting with 3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0 not found: ID does not exist" containerID="3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 
12:51:40.974916 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0"} err="failed to get container status \"3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0\": rpc error: code = NotFound desc = could not find container \"3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0\": container with ID starting with 3d05b68eea71bf5d5bea389d3bb4d41e597fde4124ae4bddf31a0f43190e24f0 not found: ID does not exist" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.974940 4799 scope.go:117] "RemoveContainer" containerID="7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7" Feb 16 12:51:40 crc kubenswrapper[4799]: E0216 12:51:40.976987 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7\": container with ID starting with 7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7 not found: ID does not exist" containerID="7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7" Feb 16 12:51:40 crc kubenswrapper[4799]: I0216 12:51:40.977015 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7"} err="failed to get container status \"7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7\": rpc error: code = NotFound desc = could not find container \"7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7\": container with ID starting with 7f8f01f886e1a7559841106d6a27140ae217b4295c5f2af5ce3166abd32f90f7 not found: ID does not exist" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.097023 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668f78969f-gvgfh"] Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 
12:51:41.108455 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-668f78969f-gvgfh"] Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.199578 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300e5319-9412-411d-8c94-5fbe2b001d54" path="/var/lib/kubelet/pods/300e5319-9412-411d-8c94-5fbe2b001d54/volumes" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.200613 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" path="/var/lib/kubelet/pods/51e33f8d-7dc3-4d9a-a6db-c005cae6f522/volumes" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.276942 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:51:41 crc kubenswrapper[4799]: W0216 12:51:41.293243 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94da3e05_8956_4ab9_b272_46b6afcf14d3.slice/crio-b8fd6cb63c424b333b4de8e09b40e3e7baa8d90333cee6e8439235e989e50774 WatchSource:0}: Error finding container b8fd6cb63c424b333b4de8e09b40e3e7baa8d90333cee6e8439235e989e50774: Status 404 returned error can't find the container with id b8fd6cb63c424b333b4de8e09b40e3e7baa8d90333cee6e8439235e989e50774 Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.656053 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.753413 4799 generic.go:334] "Generic (PLEG): container finished" podID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" containerID="77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b" exitCode=0 Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.754493 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c799447-vnxkx" 
event={"ID":"f2be7512-5841-4f22-bb5a-92c1f2beeceb","Type":"ContainerDied","Data":"77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b"} Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.763347 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94da3e05-8956-4ab9-b272-46b6afcf14d3","Type":"ContainerStarted","Data":"b8fd6cb63c424b333b4de8e09b40e3e7baa8d90333cee6e8439235e989e50774"} Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.776750 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c88d8b85b-zrggw" event={"ID":"02e06f59-2164-4486-9138-2819bf6dcf26","Type":"ContainerStarted","Data":"ab3d851bd648412916a9a4a939adaeb99fa7d4f4478a2a335f301c014dacb378"} Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.794407 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" containerID="cri-o://6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" gracePeriod=30 Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.794752 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e8b5246-e2d5-4349-aa8c-d58091276c4b","Type":"ContainerStarted","Data":"25ddfa785289c44caca61f786064db6c542d2dd00358bf5705b0c8b0c69f1f32"} Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.794902 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="9ef5643d-2fd2-478a-98bd-ed6217fa9b32" containerName="watcher-decision-engine" containerID="cri-o://202bd145655cd8f66c450813168ac3e8f765db9d3da3e72ce71b15bb77a822b2" gracePeriod=30 Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.908600 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-94c85d75f-kbj7j"] Feb 16 12:51:41 crc 
kubenswrapper[4799]: E0216 12:51:41.909076 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" containerName="init" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.909089 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" containerName="init" Feb 16 12:51:41 crc kubenswrapper[4799]: E0216 12:51:41.909107 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" containerName="dnsmasq-dns" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.909113 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" containerName="dnsmasq-dns" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.909341 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e33f8d-7dc3-4d9a-a6db-c005cae6f522" containerName="dnsmasq-dns" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.910413 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.917149 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.917357 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 12:51:41 crc kubenswrapper[4799]: I0216 12:51:41.923582 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-94c85d75f-kbj7j"] Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.013877 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-internal-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.013972 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-public-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.014040 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-config\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.014099 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-httpd-config\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.014117 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-ovndb-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.014195 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rznvk\" (UniqueName: \"kubernetes.io/projected/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-kube-api-access-rznvk\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.014215 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-combined-ca-bundle\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.116275 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rznvk\" (UniqueName: \"kubernetes.io/projected/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-kube-api-access-rznvk\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.116314 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-combined-ca-bundle\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.116360 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-internal-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.116388 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-public-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.116439 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-config\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.116478 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-httpd-config\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.116493 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-ovndb-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: 
\"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.161258 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-httpd-config\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.161413 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-ovndb-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.161828 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-internal-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.162393 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-combined-ca-bundle\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.162872 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rznvk\" (UniqueName: \"kubernetes.io/projected/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-kube-api-access-rznvk\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: 
I0216 12:51:42.172951 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-config\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.182753 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-public-tls-certs\") pod \"neutron-94c85d75f-kbj7j\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.254600 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.819735 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c88d8b85b-zrggw" event={"ID":"02e06f59-2164-4486-9138-2819bf6dcf26","Type":"ContainerStarted","Data":"25aa58840310b30ac173168ef70859089686d547aa93dddc55acca132b627fdd"} Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.819971 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:51:42 crc kubenswrapper[4799]: I0216 12:51:42.856099 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c88d8b85b-zrggw" podStartSLOduration=4.856073267 podStartE2EDuration="4.856073267s" podCreationTimestamp="2026-02-16 12:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:42.837067821 +0000 UTC m=+1208.430083145" watchObservedRunningTime="2026-02-16 12:51:42.856073267 +0000 UTC m=+1208.449088601" Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.026373 4799 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-94c85d75f-kbj7j"] Feb 16 12:51:43 crc kubenswrapper[4799]: W0216 12:51:43.042726 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda74ff520_0a2d_4853_9070_fdf3f2aa7a47.slice/crio-32998adcda55817e532e4f80801fb8947fc6bf0328556c3d84a0bb3be7167f04 WatchSource:0}: Error finding container 32998adcda55817e532e4f80801fb8947fc6bf0328556c3d84a0bb3be7167f04: Status 404 returned error can't find the container with id 32998adcda55817e532e4f80801fb8947fc6bf0328556c3d84a0bb3be7167f04 Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.679262 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.859486 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94da3e05-8956-4ab9-b272-46b6afcf14d3","Type":"ContainerStarted","Data":"f4d2c422762feab79fce7666498ee40316aba4ac7c11a6a06d7feeb182c32f42"} Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.866383 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e8b5246-e2d5-4349-aa8c-d58091276c4b","Type":"ContainerStarted","Data":"cfa9d73604d9b0f5a0fd775c76ce5af06e86b6ad4bf12c502df9baf9e3f07850"} Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.873212 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94c85d75f-kbj7j" event={"ID":"a74ff520-0a2d-4853-9070-fdf3f2aa7a47","Type":"ContainerStarted","Data":"33358521838a5cd4b967f887308e41468e348361bf89b876e838e9d59c5150a8"} Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.873257 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94c85d75f-kbj7j" 
event={"ID":"a74ff520-0a2d-4853-9070-fdf3f2aa7a47","Type":"ContainerStarted","Data":"32998adcda55817e532e4f80801fb8947fc6bf0328556c3d84a0bb3be7167f04"} Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.876376 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c799447-vnxkx" event={"ID":"f2be7512-5841-4f22-bb5a-92c1f2beeceb","Type":"ContainerStarted","Data":"f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592"} Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.901031 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.901012442 podStartE2EDuration="5.901012442s" podCreationTimestamp="2026-02-16 12:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:43.891751435 +0000 UTC m=+1209.484766759" watchObservedRunningTime="2026-02-16 12:51:43.901012442 +0000 UTC m=+1209.494027776" Feb 16 12:51:43 crc kubenswrapper[4799]: I0216 12:51:43.923594 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68c799447-vnxkx" podStartSLOduration=5.92356911 podStartE2EDuration="5.92356911s" podCreationTimestamp="2026-02-16 12:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:43.916936459 +0000 UTC m=+1209.509951793" watchObservedRunningTime="2026-02-16 12:51:43.92356911 +0000 UTC m=+1209.516584444" Feb 16 12:51:44 crc kubenswrapper[4799]: E0216 12:51:44.271945 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 
16 12:51:44 crc kubenswrapper[4799]: E0216 12:51:44.273668 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:44 crc kubenswrapper[4799]: E0216 12:51:44.277009 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:44 crc kubenswrapper[4799]: E0216 12:51:44.277083 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:51:44 crc kubenswrapper[4799]: I0216 12:51:44.315673 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:46 crc kubenswrapper[4799]: I0216 12:51:46.655951 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 16 12:51:46 crc kubenswrapper[4799]: I0216 12:51:46.660580 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 16 12:51:46 crc kubenswrapper[4799]: I0216 12:51:46.930620 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 16 12:51:47 crc kubenswrapper[4799]: I0216 12:51:47.922239 4799 generic.go:334] "Generic (PLEG): container finished" podID="2ea741e8-2ce9-47a5-a56f-c4ede0af0124" 
containerID="2cbc5e9ccb2b67c6a42b08a6f389487791bf15e7cebafcfbed12fa66596e62d7" exitCode=0 Feb 16 12:51:47 crc kubenswrapper[4799]: I0216 12:51:47.922641 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bgzm8" event={"ID":"2ea741e8-2ce9-47a5-a56f-c4ede0af0124","Type":"ContainerDied","Data":"2cbc5e9ccb2b67c6a42b08a6f389487791bf15e7cebafcfbed12fa66596e62d7"} Feb 16 12:51:47 crc kubenswrapper[4799]: I0216 12:51:47.927074 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94da3e05-8956-4ab9-b272-46b6afcf14d3","Type":"ContainerStarted","Data":"bb7dd9ef23c9a03af1905e08d3a8471c8d1354dbdf48d6ce0fd51be004dc695e"} Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:47.999679 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.999656782 podStartE2EDuration="8.999656782s" podCreationTimestamp="2026-02-16 12:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:47.99088404 +0000 UTC m=+1213.583899374" watchObservedRunningTime="2026-02-16 12:51:47.999656782 +0000 UTC m=+1213.592672116" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.023356 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.023878 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.025696 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6746fc7768-pc68r" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.171:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.171:8443: 
connect: connection refused" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.080698 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.081451 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.083692 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b64799464-xwrv9" podUID="aa66dcb2-43c2-4824-80f8-30911a4a8c72" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.172:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.172:8443: connect: connection refused" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.808732 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.809090 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.844513 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.855782 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.975509 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:48 crc kubenswrapper[4799]: I0216 12:51:48.976472 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:49 crc kubenswrapper[4799]: E0216 12:51:49.267677 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:49 crc kubenswrapper[4799]: E0216 12:51:49.272598 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:49 crc kubenswrapper[4799]: E0216 12:51:49.275030 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:49 crc kubenswrapper[4799]: E0216 12:51:49.275118 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.317806 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.404598 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87bb67d67-4q44z"] Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.404897 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" podUID="c52c4130-5b91-4e36-ad36-8333675ee0a4" 
containerName="dnsmasq-dns" containerID="cri-o://4efd09e53f7a8965112f9e61bdb188cf985320500ec87edfba54ba70c12e0155" gracePeriod=10 Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.987533 4799 generic.go:334] "Generic (PLEG): container finished" podID="c52c4130-5b91-4e36-ad36-8333675ee0a4" containerID="4efd09e53f7a8965112f9e61bdb188cf985320500ec87edfba54ba70c12e0155" exitCode=0 Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.987610 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" event={"ID":"c52c4130-5b91-4e36-ad36-8333675ee0a4","Type":"ContainerDied","Data":"4efd09e53f7a8965112f9e61bdb188cf985320500ec87edfba54ba70c12e0155"} Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.989113 4799 generic.go:334] "Generic (PLEG): container finished" podID="03cbd43b-bc5a-4954-aa6f-1cb9440076a9" containerID="b52f75425facafb9dc4b8fa9b64e8b925694f38305b240d8d0425e375afb915e" exitCode=0 Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.989185 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rczq6" event={"ID":"03cbd43b-bc5a-4954-aa6f-1cb9440076a9","Type":"ContainerDied","Data":"b52f75425facafb9dc4b8fa9b64e8b925694f38305b240d8d0425e375afb915e"} Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.991907 4799 generic.go:334] "Generic (PLEG): container finished" podID="9ef5643d-2fd2-478a-98bd-ed6217fa9b32" containerID="202bd145655cd8f66c450813168ac3e8f765db9d3da3e72ce71b15bb77a822b2" exitCode=1 Feb 16 12:51:49 crc kubenswrapper[4799]: I0216 12:51:49.992239 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9ef5643d-2fd2-478a-98bd-ed6217fa9b32","Type":"ContainerDied","Data":"202bd145655cd8f66c450813168ac3e8f765db9d3da3e72ce71b15bb77a822b2"} Feb 16 12:51:50 crc kubenswrapper[4799]: I0216 12:51:50.357313 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:50 crc 
kubenswrapper[4799]: I0216 12:51:50.357858 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 12:51:50 crc kubenswrapper[4799]: I0216 12:51:50.357959 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 12:51:50 crc kubenswrapper[4799]: I0216 12:51:50.358443 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api-log" containerID="cri-o://0bb9fc7cc62ce198e859a53c73fd3cd63a255007e5b8dd27820c6ba9249bbfed" gracePeriod=30 Feb 16 12:51:50 crc kubenswrapper[4799]: I0216 12:51:50.358636 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api" containerID="cri-o://56fc74aa295ab855de6e18f8ece981c0446d6ea4735b98e283676eade66da9b4" gracePeriod=30 Feb 16 12:51:50 crc kubenswrapper[4799]: I0216 12:51:50.413162 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 12:51:50 crc kubenswrapper[4799]: I0216 12:51:50.414112 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 12:51:51.002525 4799 generic.go:334] "Generic (PLEG): container finished" podID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerID="0bb9fc7cc62ce198e859a53c73fd3cd63a255007e5b8dd27820c6ba9249bbfed" exitCode=143 Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 12:51:51.002619 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 12:51:51.002629 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 
12:51:51.002656 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b5153cc1-228a-4731-adc9-dbdde3ae1661","Type":"ContainerDied","Data":"0bb9fc7cc62ce198e859a53c73fd3cd63a255007e5b8dd27820c6ba9249bbfed"} Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 12:51:51.003159 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 12:51:51.003189 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 12:51:51.160658 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" podUID="c52c4130-5b91-4e36-ad36-8333675ee0a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 12:51:51.793479 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:51:51 crc kubenswrapper[4799]: I0216 12:51:51.793566 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:51:52 crc kubenswrapper[4799]: I0216 12:51:52.028959 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.174:9322/\": read tcp 
10.217.0.2:52398->10.217.0.174:9322: read: connection reset by peer" Feb 16 12:51:52 crc kubenswrapper[4799]: I0216 12:51:52.029626 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9322/\": read tcp 10.217.0.2:52414->10.217.0.174:9322: read: connection reset by peer" Feb 16 12:51:52 crc kubenswrapper[4799]: I0216 12:51:52.395390 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:52 crc kubenswrapper[4799]: I0216 12:51:52.395528 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:51:52 crc kubenswrapper[4799]: I0216 12:51:52.400049 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 12:51:53 crc kubenswrapper[4799]: I0216 12:51:53.024922 4799 generic.go:334] "Generic (PLEG): container finished" podID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerID="56fc74aa295ab855de6e18f8ece981c0446d6ea4735b98e283676eade66da9b4" exitCode=0 Feb 16 12:51:53 crc kubenswrapper[4799]: I0216 12:51:53.025271 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b5153cc1-228a-4731-adc9-dbdde3ae1661","Type":"ContainerDied","Data":"56fc74aa295ab855de6e18f8ece981c0446d6ea4735b98e283676eade66da9b4"} Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.195914 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.197541 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 12:51:54 crc kubenswrapper[4799]: E0216 12:51:54.288551 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:54 crc kubenswrapper[4799]: E0216 12:51:54.305524 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:54 crc kubenswrapper[4799]: E0216 12:51:54.333274 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:54 crc kubenswrapper[4799]: E0216 12:51:54.333351 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.425188 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.527390 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.554691 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rczq6" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.555107 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-credential-keys\") pod \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.555167 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmzlp\" (UniqueName: \"kubernetes.io/projected/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-kube-api-access-vmzlp\") pod \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.555189 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-config-data\") pod \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.555314 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-combined-ca-bundle\") pod \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.555347 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-fernet-keys\") pod \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.555432 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-scripts\") pod \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\" (UID: \"2ea741e8-2ce9-47a5-a56f-c4ede0af0124\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.580374 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2ea741e8-2ce9-47a5-a56f-c4ede0af0124" (UID: "2ea741e8-2ce9-47a5-a56f-c4ede0af0124"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.608337 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2ea741e8-2ce9-47a5-a56f-c4ede0af0124" (UID: "2ea741e8-2ce9-47a5-a56f-c4ede0af0124"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.608451 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-scripts" (OuterVolumeSpecName: "scripts") pod "2ea741e8-2ce9-47a5-a56f-c4ede0af0124" (UID: "2ea741e8-2ce9-47a5-a56f-c4ede0af0124"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.628145 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-kube-api-access-vmzlp" (OuterVolumeSpecName: "kube-api-access-vmzlp") pod "2ea741e8-2ce9-47a5-a56f-c4ede0af0124" (UID: "2ea741e8-2ce9-47a5-a56f-c4ede0af0124"). InnerVolumeSpecName "kube-api-access-vmzlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.658537 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-combined-ca-bundle\") pod \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.658622 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-custom-prometheus-ca\") pod \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.658691 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-logs\") pod \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.658807 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzzxs\" (UniqueName: \"kubernetes.io/projected/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-kube-api-access-zzzxs\") pod \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.658844 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-logs\") pod \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.658898 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-combined-ca-bundle\") pod \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.658939 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5xp5\" (UniqueName: \"kubernetes.io/projected/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-kube-api-access-q5xp5\") pod \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.659034 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-config-data\") pod \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.659071 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-scripts\") pod \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\" (UID: \"03cbd43b-bc5a-4954-aa6f-1cb9440076a9\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.659109 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-config-data\") pod \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\" (UID: \"9ef5643d-2fd2-478a-98bd-ed6217fa9b32\") " Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.659634 4799 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.659663 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmzlp\" (UniqueName: 
\"kubernetes.io/projected/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-kube-api-access-vmzlp\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.659676 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.659689 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.660958 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-logs" (OuterVolumeSpecName: "logs") pod "03cbd43b-bc5a-4954-aa6f-1cb9440076a9" (UID: "03cbd43b-bc5a-4954-aa6f-1cb9440076a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.663665 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-logs" (OuterVolumeSpecName: "logs") pod "9ef5643d-2fd2-478a-98bd-ed6217fa9b32" (UID: "9ef5643d-2fd2-478a-98bd-ed6217fa9b32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.674556 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea741e8-2ce9-47a5-a56f-c4ede0af0124" (UID: "2ea741e8-2ce9-47a5-a56f-c4ede0af0124"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.681314 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-kube-api-access-zzzxs" (OuterVolumeSpecName: "kube-api-access-zzzxs") pod "9ef5643d-2fd2-478a-98bd-ed6217fa9b32" (UID: "9ef5643d-2fd2-478a-98bd-ed6217fa9b32"). InnerVolumeSpecName "kube-api-access-zzzxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.683486 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-kube-api-access-q5xp5" (OuterVolumeSpecName: "kube-api-access-q5xp5") pod "03cbd43b-bc5a-4954-aa6f-1cb9440076a9" (UID: "03cbd43b-bc5a-4954-aa6f-1cb9440076a9"). InnerVolumeSpecName "kube-api-access-q5xp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.748768 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-scripts" (OuterVolumeSpecName: "scripts") pod "03cbd43b-bc5a-4954-aa6f-1cb9440076a9" (UID: "03cbd43b-bc5a-4954-aa6f-1cb9440076a9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.761768 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.761793 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzzxs\" (UniqueName: \"kubernetes.io/projected/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-kube-api-access-zzzxs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.761802 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.761810 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5xp5\" (UniqueName: \"kubernetes.io/projected/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-kube-api-access-q5xp5\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.761820 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.761828 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.861138 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-config-data" (OuterVolumeSpecName: "config-data") pod "2ea741e8-2ce9-47a5-a56f-c4ede0af0124" (UID: "2ea741e8-2ce9-47a5-a56f-c4ede0af0124"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.869844 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea741e8-2ce9-47a5-a56f-c4ede0af0124-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.929847 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9ef5643d-2fd2-478a-98bd-ed6217fa9b32" (UID: "9ef5643d-2fd2-478a-98bd-ed6217fa9b32"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.972185 4799 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.981514 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03cbd43b-bc5a-4954-aa6f-1cb9440076a9" (UID: "03cbd43b-bc5a-4954-aa6f-1cb9440076a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.987533 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ef5643d-2fd2-478a-98bd-ed6217fa9b32" (UID: "9ef5643d-2fd2-478a-98bd-ed6217fa9b32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:54 crc kubenswrapper[4799]: I0216 12:51:54.994768 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.000157 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-config-data" (OuterVolumeSpecName: "config-data") pod "03cbd43b-bc5a-4954-aa6f-1cb9440076a9" (UID: "03cbd43b-bc5a-4954-aa6f-1cb9440076a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.088445 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-config-data" (OuterVolumeSpecName: "config-data") pod "9ef5643d-2fd2-478a-98bd-ed6217fa9b32" (UID: "9ef5643d-2fd2-478a-98bd-ed6217fa9b32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.091238 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rczq6" event={"ID":"03cbd43b-bc5a-4954-aa6f-1cb9440076a9","Type":"ContainerDied","Data":"9a088da99d625b603d4ac3f727e98ea97071c1017b56f9a02ababfc66c897ae1"} Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.091299 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a088da99d625b603d4ac3f727e98ea97071c1017b56f9a02ababfc66c897ae1" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.091548 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rczq6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.103479 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94c85d75f-kbj7j" event={"ID":"a74ff520-0a2d-4853-9070-fdf3f2aa7a47","Type":"ContainerStarted","Data":"c0260f0c60d8d558529df01b63ccf5ac23d9148da125d1a9411328b50bb2608c"} Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.103566 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.114185 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bgzm8" event={"ID":"2ea741e8-2ce9-47a5-a56f-c4ede0af0124","Type":"ContainerDied","Data":"3c767225ae4adee9d73ebfc933753e712ecc84983b9677d6ba07004bbf8264db"} Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.114246 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c767225ae4adee9d73ebfc933753e712ecc84983b9677d6ba07004bbf8264db" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.114357 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bgzm8" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.114753 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-swift-storage-0\") pod \"c52c4130-5b91-4e36-ad36-8333675ee0a4\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.114808 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-svc\") pod \"c52c4130-5b91-4e36-ad36-8333675ee0a4\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.114839 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-sb\") pod \"c52c4130-5b91-4e36-ad36-8333675ee0a4\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.114873 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw4km\" (UniqueName: \"kubernetes.io/projected/c52c4130-5b91-4e36-ad36-8333675ee0a4-kube-api-access-cw4km\") pod \"c52c4130-5b91-4e36-ad36-8333675ee0a4\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.146326 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-config\") pod \"c52c4130-5b91-4e36-ad36-8333675ee0a4\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.146386 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-nb\") pod \"c52c4130-5b91-4e36-ad36-8333675ee0a4\" (UID: \"c52c4130-5b91-4e36-ad36-8333675ee0a4\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.146097 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-94c85d75f-kbj7j" podStartSLOduration=14.146073817 podStartE2EDuration="14.146073817s" podCreationTimestamp="2026-02-16 12:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:55.130420327 +0000 UTC m=+1220.723435661" watchObservedRunningTime="2026-02-16 12:51:55.146073817 +0000 UTC m=+1220.739089151" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.130914 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9ef5643d-2fd2-478a-98bd-ed6217fa9b32","Type":"ContainerDied","Data":"1e8d181ada264ffef02b5cab01376503beca7cc826cfe8fae6e69c81f27256d8"} Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.148212 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" event={"ID":"c52c4130-5b91-4e36-ad36-8333675ee0a4","Type":"ContainerDied","Data":"169b9793d6a9c9cfa12e1d63816e246bc8b0611f479151090593dbaac9f089a7"} Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.148258 4799 scope.go:117] "RemoveContainer" containerID="202bd145655cd8f66c450813168ac3e8f765db9d3da3e72ce71b15bb77a822b2" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.148530 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87bb67d67-4q44z" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.131025 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.153448 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.154684 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.155586 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef5643d-2fd2-478a-98bd-ed6217fa9b32-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.157020 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cbd43b-bc5a-4954-aa6f-1cb9440076a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.164594 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c52c4130-5b91-4e36-ad36-8333675ee0a4-kube-api-access-cw4km" (OuterVolumeSpecName: "kube-api-access-cw4km") pod "c52c4130-5b91-4e36-ad36-8333675ee0a4" (UID: "c52c4130-5b91-4e36-ad36-8333675ee0a4"). InnerVolumeSpecName "kube-api-access-cw4km". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.262280 4799 scope.go:117] "RemoveContainer" containerID="4efd09e53f7a8965112f9e61bdb188cf985320500ec87edfba54ba70c12e0155" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.263842 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw4km\" (UniqueName: \"kubernetes.io/projected/c52c4130-5b91-4e36-ad36-8333675ee0a4-kube-api-access-cw4km\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.264335 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.277209 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.306803 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:51:55 crc kubenswrapper[4799]: E0216 12:51:55.307378 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52c4130-5b91-4e36-ad36-8333675ee0a4" containerName="init" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.307397 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52c4130-5b91-4e36-ad36-8333675ee0a4" containerName="init" Feb 16 12:51:55 crc kubenswrapper[4799]: E0216 12:51:55.307542 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52c4130-5b91-4e36-ad36-8333675ee0a4" containerName="dnsmasq-dns" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.307556 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52c4130-5b91-4e36-ad36-8333675ee0a4" containerName="dnsmasq-dns" Feb 16 12:51:55 crc kubenswrapper[4799]: E0216 12:51:55.307563 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea741e8-2ce9-47a5-a56f-c4ede0af0124" containerName="keystone-bootstrap" Feb 16 12:51:55 crc 
kubenswrapper[4799]: I0216 12:51:55.307570 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea741e8-2ce9-47a5-a56f-c4ede0af0124" containerName="keystone-bootstrap" Feb 16 12:51:55 crc kubenswrapper[4799]: E0216 12:51:55.307633 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef5643d-2fd2-478a-98bd-ed6217fa9b32" containerName="watcher-decision-engine" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.307642 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef5643d-2fd2-478a-98bd-ed6217fa9b32" containerName="watcher-decision-engine" Feb 16 12:51:55 crc kubenswrapper[4799]: E0216 12:51:55.307669 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03cbd43b-bc5a-4954-aa6f-1cb9440076a9" containerName="placement-db-sync" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.307696 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="03cbd43b-bc5a-4954-aa6f-1cb9440076a9" containerName="placement-db-sync" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.308095 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea741e8-2ce9-47a5-a56f-c4ede0af0124" containerName="keystone-bootstrap" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.308115 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef5643d-2fd2-478a-98bd-ed6217fa9b32" containerName="watcher-decision-engine" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.308155 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="03cbd43b-bc5a-4954-aa6f-1cb9440076a9" containerName="placement-db-sync" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.308178 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c52c4130-5b91-4e36-ad36-8333675ee0a4" containerName="dnsmasq-dns" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.309791 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.315134 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.316405 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.401505 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c52c4130-5b91-4e36-ad36-8333675ee0a4" (UID: "c52c4130-5b91-4e36-ad36-8333675ee0a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.401511 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-config" (OuterVolumeSpecName: "config") pod "c52c4130-5b91-4e36-ad36-8333675ee0a4" (UID: "c52c4130-5b91-4e36-ad36-8333675ee0a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.435522 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c52c4130-5b91-4e36-ad36-8333675ee0a4" (UID: "c52c4130-5b91-4e36-ad36-8333675ee0a4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.445013 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c52c4130-5b91-4e36-ad36-8333675ee0a4" (UID: "c52c4130-5b91-4e36-ad36-8333675ee0a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.449578 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c52c4130-5b91-4e36-ad36-8333675ee0a4" (UID: "c52c4130-5b91-4e36-ad36-8333675ee0a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.466849 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp8r9\" (UniqueName: \"kubernetes.io/projected/89824920-bcd3-4640-b27b-68554fad00bb-kube-api-access-cp8r9\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.466940 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.466971 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/89824920-bcd3-4640-b27b-68554fad00bb-logs\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.467065 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-config-data\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.467113 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.467282 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.467300 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.467312 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.467323 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.467334 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c52c4130-5b91-4e36-ad36-8333675ee0a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: E0216 12:51:55.467720 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ef5643d_2fd2_478a_98bd_ed6217fa9b32.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ef5643d_2fd2_478a_98bd_ed6217fa9b32.slice/crio-1e8d181ada264ffef02b5cab01376503beca7cc826cfe8fae6e69c81f27256d8\": RecentStats: unable to find data in memory cache]" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.553299 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.570410 4799 scope.go:117] "RemoveContainer" containerID="1ece960357bedd0051760d0fda5e677e707f2552a35c5e1ad02cb2d5bda1517b" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.570455 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.570562 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp8r9\" (UniqueName: \"kubernetes.io/projected/89824920-bcd3-4640-b27b-68554fad00bb-kube-api-access-cp8r9\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.570618 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.570647 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89824920-bcd3-4640-b27b-68554fad00bb-logs\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.570720 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-config-data\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.571832 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89824920-bcd3-4640-b27b-68554fad00bb-logs\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.594533 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.599207 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.614799 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-config-data\") pod \"watcher-decision-engine-0\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.654016 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp8r9\" (UniqueName: \"kubernetes.io/projected/89824920-bcd3-4640-b27b-68554fad00bb-kube-api-access-cp8r9\") pod \"watcher-decision-engine-0\" (UID: 
\"89824920-bcd3-4640-b27b-68554fad00bb\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.671529 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-config-data\") pod \"b5153cc1-228a-4731-adc9-dbdde3ae1661\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.671643 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-custom-prometheus-ca\") pod \"b5153cc1-228a-4731-adc9-dbdde3ae1661\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.671670 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5153cc1-228a-4731-adc9-dbdde3ae1661-logs\") pod \"b5153cc1-228a-4731-adc9-dbdde3ae1661\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.671774 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-combined-ca-bundle\") pod \"b5153cc1-228a-4731-adc9-dbdde3ae1661\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.671814 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92w6\" (UniqueName: \"kubernetes.io/projected/b5153cc1-228a-4731-adc9-dbdde3ae1661-kube-api-access-h92w6\") pod \"b5153cc1-228a-4731-adc9-dbdde3ae1661\" (UID: \"b5153cc1-228a-4731-adc9-dbdde3ae1661\") " Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.672859 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b5153cc1-228a-4731-adc9-dbdde3ae1661-logs" (OuterVolumeSpecName: "logs") pod "b5153cc1-228a-4731-adc9-dbdde3ae1661" (UID: "b5153cc1-228a-4731-adc9-dbdde3ae1661"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.677478 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5153cc1-228a-4731-adc9-dbdde3ae1661-kube-api-access-h92w6" (OuterVolumeSpecName: "kube-api-access-h92w6") pod "b5153cc1-228a-4731-adc9-dbdde3ae1661" (UID: "b5153cc1-228a-4731-adc9-dbdde3ae1661"). InnerVolumeSpecName "kube-api-access-h92w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.755877 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b5153cc1-228a-4731-adc9-dbdde3ae1661" (UID: "b5153cc1-228a-4731-adc9-dbdde3ae1661"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.774560 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92w6\" (UniqueName: \"kubernetes.io/projected/b5153cc1-228a-4731-adc9-dbdde3ae1661-kube-api-access-h92w6\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.774601 4799 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.774612 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5153cc1-228a-4731-adc9-dbdde3ae1661-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.779419 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74bd488478-wqpd6"] Feb 16 12:51:55 crc kubenswrapper[4799]: E0216 12:51:55.779980 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.780000 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api" Feb 16 12:51:55 crc kubenswrapper[4799]: E0216 12:51:55.780012 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api-log" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.780019 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api-log" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.780245 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" 
containerName="watcher-api-log" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.780260 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" containerName="watcher-api" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.781009 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.795689 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qqmr2" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.795857 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.795991 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.796045 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.796091 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.804230 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5153cc1-228a-4731-adc9-dbdde3ae1661" (UID: "b5153cc1-228a-4731-adc9-dbdde3ae1661"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.804411 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.834269 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74bd488478-wqpd6"] Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.839614 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-config-data" (OuterVolumeSpecName: "config-data") pod "b5153cc1-228a-4731-adc9-dbdde3ae1661" (UID: "b5153cc1-228a-4731-adc9-dbdde3ae1661"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.846461 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8cc8d798d-nqmvr"] Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.848245 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.853489 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.854189 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.854468 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.854706 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9hnkl" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.854835 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.870961 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.876860 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-internal-tls-certs\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.882654 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8cc8d798d-nqmvr"] Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.883909 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-credential-keys\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.884188 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-combined-ca-bundle\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.884224 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-config-data\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.884280 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-scripts\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.884297 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqgtr\" (UniqueName: \"kubernetes.io/projected/f3bee5f6-a064-4641-9a90-de58c60eb3aa-kube-api-access-lqgtr\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.884405 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-public-tls-certs\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.884476 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-fernet-keys\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.884539 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.884550 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5153cc1-228a-4731-adc9-dbdde3ae1661-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:51:55 crc 
kubenswrapper[4799]: I0216 12:51:55.938196 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87bb67d67-4q44z"] Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.966797 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-87bb67d67-4q44z"] Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.985979 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-internal-tls-certs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986048 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45g8w\" (UniqueName: \"kubernetes.io/projected/75a3aef5-ff55-4650-93f7-93f79bd441ca-kube-api-access-45g8w\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986068 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-scripts\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986095 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-public-tls-certs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986112 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a3aef5-ff55-4650-93f7-93f79bd441ca-logs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986156 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-combined-ca-bundle\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986174 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-combined-ca-bundle\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986193 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-config-data\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986221 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-scripts\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986240 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lqgtr\" (UniqueName: \"kubernetes.io/projected/f3bee5f6-a064-4641-9a90-de58c60eb3aa-kube-api-access-lqgtr\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986257 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-config-data\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986285 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-public-tls-certs\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986316 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-fernet-keys\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986342 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-internal-tls-certs\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.986388 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-credential-keys\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:55 crc kubenswrapper[4799]: I0216 12:51:55.994775 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-internal-tls-certs\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.000916 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-config-data\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.005791 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-public-tls-certs\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.005878 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-scripts\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.006371 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-credential-keys\") pod \"keystone-74bd488478-wqpd6\" (UID: 
\"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.009459 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-combined-ca-bundle\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.012019 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3bee5f6-a064-4641-9a90-de58c60eb3aa-fernet-keys\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.025691 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqgtr\" (UniqueName: \"kubernetes.io/projected/f3bee5f6-a064-4641-9a90-de58c60eb3aa-kube-api-access-lqgtr\") pod \"keystone-74bd488478-wqpd6\" (UID: \"f3bee5f6-a064-4641-9a90-de58c60eb3aa\") " pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.087656 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-internal-tls-certs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.087705 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45g8w\" (UniqueName: \"kubernetes.io/projected/75a3aef5-ff55-4650-93f7-93f79bd441ca-kube-api-access-45g8w\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " 
pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.087730 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-scripts\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.087759 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-public-tls-certs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.087780 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a3aef5-ff55-4650-93f7-93f79bd441ca-logs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.087804 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-combined-ca-bundle\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.087834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-config-data\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.088936 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a3aef5-ff55-4650-93f7-93f79bd441ca-logs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.093036 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-scripts\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.093937 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-internal-tls-certs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.095310 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-combined-ca-bundle\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.097819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-config-data\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.109704 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-public-tls-certs\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.110833 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45g8w\" (UniqueName: \"kubernetes.io/projected/75a3aef5-ff55-4650-93f7-93f79bd441ca-kube-api-access-45g8w\") pod \"placement-8cc8d798d-nqmvr\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.131667 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.172104 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.219335 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f58d8f5db-4k8dn"] Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.224798 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.392583 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e71f22a-250c-48e2-8309-7dfeb1325a2b","Type":"ContainerStarted","Data":"c634a1be584089049bd50768fd674977a52cd6d3f29581008e3014fcac6db1c0"} Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.401647 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b5153cc1-228a-4731-adc9-dbdde3ae1661","Type":"ContainerDied","Data":"a62844b131fc29879a8eef291394f7f02f24e121a3f85fefe61182edc5644bae"} Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.401721 4799 scope.go:117] "RemoveContainer" containerID="56fc74aa295ab855de6e18f8ece981c0446d6ea4735b98e283676eade66da9b4" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.401965 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.435055 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c303ca-c915-4f80-90b2-5e23882687b5-logs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.435111 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-internal-tls-certs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.435198 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-public-tls-certs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.435249 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-config-data\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.435397 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-combined-ca-bundle\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.435524 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-scripts\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.435554 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4fj8\" (UniqueName: \"kubernetes.io/projected/d2c303ca-c915-4f80-90b2-5e23882687b5-kube-api-access-k4fj8\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.439584 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2bbw" 
event={"ID":"e821341e-3e99-4606-a96d-00adad2f39fb","Type":"ContainerStarted","Data":"91337eefa295f64051763829d7f6722f895d9a52e33c40165438fd5a03064cd4"} Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.442472 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f58d8f5db-4k8dn"] Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.518684 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m5dfr" event={"ID":"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930","Type":"ContainerStarted","Data":"497721a037daa43af30da6128b2c50671ea4cdfc4bf35f240def5332dea09e29"} Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.522627 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-x2bbw" podStartSLOduration=4.274935378 podStartE2EDuration="57.522606764s" podCreationTimestamp="2026-02-16 12:50:59 +0000 UTC" firstStartedPulling="2026-02-16 12:51:01.590772748 +0000 UTC m=+1167.183788082" lastFinishedPulling="2026-02-16 12:51:54.838444134 +0000 UTC m=+1220.431459468" observedRunningTime="2026-02-16 12:51:56.520280677 +0000 UTC m=+1222.113296011" watchObservedRunningTime="2026-02-16 12:51:56.522606764 +0000 UTC m=+1222.115622088" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.537844 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-combined-ca-bundle\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.537989 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-scripts\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" 
Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.538018 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4fj8\" (UniqueName: \"kubernetes.io/projected/d2c303ca-c915-4f80-90b2-5e23882687b5-kube-api-access-k4fj8\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.538070 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c303ca-c915-4f80-90b2-5e23882687b5-logs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.538101 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-internal-tls-certs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.538156 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-public-tls-certs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.538195 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-config-data\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.559070 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c303ca-c915-4f80-90b2-5e23882687b5-logs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.587203 4799 scope.go:117] "RemoveContainer" containerID="0bb9fc7cc62ce198e859a53c73fd3cd63a255007e5b8dd27820c6ba9249bbfed" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.608953 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-m5dfr" podStartSLOduration=5.377747833 podStartE2EDuration="58.608935415s" podCreationTimestamp="2026-02-16 12:50:58 +0000 UTC" firstStartedPulling="2026-02-16 12:51:01.421281476 +0000 UTC m=+1167.014296810" lastFinishedPulling="2026-02-16 12:51:54.652469058 +0000 UTC m=+1220.245484392" observedRunningTime="2026-02-16 12:51:56.593894343 +0000 UTC m=+1222.186909677" watchObservedRunningTime="2026-02-16 12:51:56.608935415 +0000 UTC m=+1222.201950749" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.701478 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-combined-ca-bundle\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.702176 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-public-tls-certs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.745022 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-scripts\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.751949 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4fj8\" (UniqueName: \"kubernetes.io/projected/d2c303ca-c915-4f80-90b2-5e23882687b5-kube-api-access-k4fj8\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.763423 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-internal-tls-certs\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.767566 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.782357 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c303ca-c915-4f80-90b2-5e23882687b5-config-data\") pod \"placement-6f58d8f5db-4k8dn\" (UID: \"d2c303ca-c915-4f80-90b2-5e23882687b5\") " pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.782468 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.806912 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.828544 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 
12:51:56.828663 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.847399 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.848495 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.848681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.848713 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-config-data\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.848759 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dddb140-3f08-4b16-97bf-be71806e7add-logs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.848784 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 
12:51:56.848815 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs22d\" (UniqueName: \"kubernetes.io/projected/9dddb140-3f08-4b16-97bf-be71806e7add-kube-api-access-bs22d\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.850296 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.850392 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.850728 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.940064 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.951961 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.952014 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-config-data\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.952080 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dddb140-3f08-4b16-97bf-be71806e7add-logs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.953316 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dddb140-3f08-4b16-97bf-be71806e7add-logs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.953432 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.953495 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs22d\" (UniqueName: \"kubernetes.io/projected/9dddb140-3f08-4b16-97bf-be71806e7add-kube-api-access-bs22d\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.953831 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 
12:51:56.953902 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.958314 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.959568 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.959940 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-config-data\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.960304 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:56 crc kubenswrapper[4799]: I0216 12:51:56.976207 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9dddb140-3f08-4b16-97bf-be71806e7add-custom-prometheus-ca\") pod 
\"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.003018 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.003044 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs22d\" (UniqueName: \"kubernetes.io/projected/9dddb140-3f08-4b16-97bf-be71806e7add-kube-api-access-bs22d\") pod \"watcher-api-0\" (UID: \"9dddb140-3f08-4b16-97bf-be71806e7add\") " pod="openstack/watcher-api-0" Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.166767 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef5643d-2fd2-478a-98bd-ed6217fa9b32" path="/var/lib/kubelet/pods/9ef5643d-2fd2-478a-98bd-ed6217fa9b32/volumes" Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.167663 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5153cc1-228a-4731-adc9-dbdde3ae1661" path="/var/lib/kubelet/pods/b5153cc1-228a-4731-adc9-dbdde3ae1661/volumes" Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.168444 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c52c4130-5b91-4e36-ad36-8333675ee0a4" path="/var/lib/kubelet/pods/c52c4130-5b91-4e36-ad36-8333675ee0a4/volumes" Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.169989 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.402326 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74bd488478-wqpd6"] Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.603649 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerStarted","Data":"1bb4529c89b693c7cc7a09aad42d8ee033c49094a16b88a5d9b2c83932b8094b"} Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.604261 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerStarted","Data":"23ab338826720b60112701afacadbe59b97e0574d958aeed561e5c4e6cd4c569"} Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.615414 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74bd488478-wqpd6" event={"ID":"f3bee5f6-a064-4641-9a90-de58c60eb3aa","Type":"ContainerStarted","Data":"641541a0d12018d68562e682beca0aa8311f8c5f0c07a1a479cb97892c69b4a5"} Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.619100 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8cc8d798d-nqmvr"] Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.680469 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.680442104 podStartE2EDuration="2.680442104s" podCreationTimestamp="2026-02-16 12:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:57.652747738 +0000 UTC m=+1223.245763182" watchObservedRunningTime="2026-02-16 12:51:57.680442104 +0000 UTC m=+1223.273457438" Feb 16 12:51:57 crc kubenswrapper[4799]: I0216 12:51:57.717042 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-6f58d8f5db-4k8dn"] Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.011760 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.647274 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74bd488478-wqpd6" event={"ID":"f3bee5f6-a064-4641-9a90-de58c60eb3aa","Type":"ContainerStarted","Data":"b049e88293e24ce5dee0ccf2767fb2ba3d89c587b065b64b67b9f9bb050844a9"} Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.648236 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.656934 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc8d798d-nqmvr" event={"ID":"75a3aef5-ff55-4650-93f7-93f79bd441ca","Type":"ContainerStarted","Data":"af6ee9d45905c86611fe341baff2696cb44b3a6d5d3b655f2c263c4abedae3e1"} Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.656997 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc8d798d-nqmvr" event={"ID":"75a3aef5-ff55-4650-93f7-93f79bd441ca","Type":"ContainerStarted","Data":"7f15e76afa9adbc362d3402cc00eb78077786b8a34e6c349593ec8ccddbacfe0"} Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.664344 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9dddb140-3f08-4b16-97bf-be71806e7add","Type":"ContainerStarted","Data":"de6c7264a9a3d61346b9f5a11797f472a882674db89973b9ca4800d14bc6e702"} Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.664404 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9dddb140-3f08-4b16-97bf-be71806e7add","Type":"ContainerStarted","Data":"d698c918b786a5fe0e64c4feca51f2935ae4f9b5f22d62747f84eb7bb51db395"} Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.673465 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-6f58d8f5db-4k8dn" event={"ID":"d2c303ca-c915-4f80-90b2-5e23882687b5","Type":"ContainerStarted","Data":"7a8d17dba53b366445f15d3404ec63d8906c50419abf2b982603e4b69be63015"} Feb 16 12:51:58 crc kubenswrapper[4799]: I0216 12:51:58.673552 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f58d8f5db-4k8dn" event={"ID":"d2c303ca-c915-4f80-90b2-5e23882687b5","Type":"ContainerStarted","Data":"6bc3f95d2fd0598664563259f14a25b1881a9cbe02aa97e3310445d4aa7163ec"} Feb 16 12:51:59 crc kubenswrapper[4799]: E0216 12:51:59.273555 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:59 crc kubenswrapper[4799]: E0216 12:51:59.275200 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:59 crc kubenswrapper[4799]: E0216 12:51:59.276948 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:51:59 crc kubenswrapper[4799]: E0216 12:51:59.276991 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/watcher-applier-0" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.695336 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9dddb140-3f08-4b16-97bf-be71806e7add","Type":"ContainerStarted","Data":"b2c713e5aadb3b166b3702f5728492edf5bb41fae3cac37af8eee2a8ed125a75"} Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.695724 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.710224 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f58d8f5db-4k8dn" event={"ID":"d2c303ca-c915-4f80-90b2-5e23882687b5","Type":"ContainerStarted","Data":"8d8e16e1c1bfe44148175c10b5d0d461165fdda6d3a1b75a834172f601310ba5"} Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.710836 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.710869 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.717478 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc8d798d-nqmvr" event={"ID":"75a3aef5-ff55-4650-93f7-93f79bd441ca","Type":"ContainerStarted","Data":"326df233a027b8bc25c00fcf9dc85645b22527b6b8c596439f9b3758f693fca1"} Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.717545 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.717566 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.751477 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74bd488478-wqpd6" podStartSLOduration=4.7514594930000005 podStartE2EDuration="4.751459493s" podCreationTimestamp="2026-02-16 12:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:58.679663635 +0000 UTC m=+1224.272678959" watchObservedRunningTime="2026-02-16 12:51:59.751459493 +0000 UTC m=+1225.344474817" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.752536 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.752529193 podStartE2EDuration="3.752529193s" podCreationTimestamp="2026-02-16 12:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:59.733492486 +0000 UTC m=+1225.326507830" watchObservedRunningTime="2026-02-16 12:51:59.752529193 +0000 UTC m=+1225.345544527" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.774226 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f58d8f5db-4k8dn" podStartSLOduration=3.774204626 podStartE2EDuration="3.774204626s" podCreationTimestamp="2026-02-16 12:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:59.769083909 +0000 UTC m=+1225.362099243" watchObservedRunningTime="2026-02-16 12:51:59.774204626 +0000 UTC m=+1225.367219960" Feb 16 12:51:59 crc kubenswrapper[4799]: I0216 12:51:59.836746 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8cc8d798d-nqmvr" podStartSLOduration=4.836730354 podStartE2EDuration="4.836730354s" podCreationTimestamp="2026-02-16 12:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:51:59.834388006 +0000 UTC m=+1225.427403340" watchObservedRunningTime="2026-02-16 12:51:59.836730354 +0000 UTC m=+1225.429745678" Feb 16 12:52:01 crc kubenswrapper[4799]: I0216 12:52:01.055170 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:52:01 crc kubenswrapper[4799]: I0216 12:52:01.438907 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:52:01 crc kubenswrapper[4799]: I0216 12:52:01.793139 4799 generic.go:334] "Generic (PLEG): container finished" podID="89824920-bcd3-4640-b27b-68554fad00bb" containerID="1bb4529c89b693c7cc7a09aad42d8ee033c49094a16b88a5d9b2c83932b8094b" exitCode=1 Feb 16 12:52:01 crc kubenswrapper[4799]: I0216 12:52:01.793808 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerDied","Data":"1bb4529c89b693c7cc7a09aad42d8ee033c49094a16b88a5d9b2c83932b8094b"} Feb 16 12:52:01 crc kubenswrapper[4799]: I0216 12:52:01.794240 4799 scope.go:117] "RemoveContainer" containerID="1bb4529c89b693c7cc7a09aad42d8ee033c49094a16b88a5d9b2c83932b8094b" Feb 16 12:52:02 crc kubenswrapper[4799]: I0216 12:52:02.170986 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 12:52:02 crc kubenswrapper[4799]: I0216 12:52:02.171066 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:52:02 crc kubenswrapper[4799]: I0216 12:52:02.502542 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 16 12:52:03 crc kubenswrapper[4799]: I0216 12:52:03.194837 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b64799464-xwrv9" Feb 16 12:52:03 crc 
kubenswrapper[4799]: I0216 12:52:03.272659 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6746fc7768-pc68r"] Feb 16 12:52:03 crc kubenswrapper[4799]: I0216 12:52:03.272951 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6746fc7768-pc68r" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon-log" containerID="cri-o://6a7d9541f9ee6c4936a4ca92c8e7cbe7f3befe853e369e78b7a6a37ba1b1f36a" gracePeriod=30 Feb 16 12:52:03 crc kubenswrapper[4799]: I0216 12:52:03.273116 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6746fc7768-pc68r" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" containerID="cri-o://9e98b1e0776c21e798c8ec0399674b680e3006908cb7040c91104594062fa43f" gracePeriod=30 Feb 16 12:52:03 crc kubenswrapper[4799]: I0216 12:52:03.285349 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6746fc7768-pc68r" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.171:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 16 12:52:03 crc kubenswrapper[4799]: I0216 12:52:03.299664 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6746fc7768-pc68r" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.171:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 16 12:52:04 crc kubenswrapper[4799]: E0216 12:52:04.268506 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:52:04 crc kubenswrapper[4799]: E0216 12:52:04.270937 4799 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:52:04 crc kubenswrapper[4799]: E0216 12:52:04.272724 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:52:04 crc kubenswrapper[4799]: E0216 12:52:04.272864 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:52:05 crc kubenswrapper[4799]: I0216 12:52:05.873259 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 12:52:05 crc kubenswrapper[4799]: I0216 12:52:05.874145 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 12:52:06 crc kubenswrapper[4799]: I0216 12:52:06.227183 4799 generic.go:334] "Generic (PLEG): container finished" podID="e821341e-3e99-4606-a96d-00adad2f39fb" containerID="91337eefa295f64051763829d7f6722f895d9a52e33c40165438fd5a03064cd4" exitCode=0 Feb 16 12:52:06 crc kubenswrapper[4799]: I0216 12:52:06.227280 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2bbw" 
event={"ID":"e821341e-3e99-4606-a96d-00adad2f39fb","Type":"ContainerDied","Data":"91337eefa295f64051763829d7f6722f895d9a52e33c40165438fd5a03064cd4"} Feb 16 12:52:06 crc kubenswrapper[4799]: I0216 12:52:06.232507 4799 generic.go:334] "Generic (PLEG): container finished" podID="5357e09b-7a51-4687-be1c-99a473120c90" containerID="9e98b1e0776c21e798c8ec0399674b680e3006908cb7040c91104594062fa43f" exitCode=0 Feb 16 12:52:06 crc kubenswrapper[4799]: I0216 12:52:06.232653 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6746fc7768-pc68r" event={"ID":"5357e09b-7a51-4687-be1c-99a473120c90","Type":"ContainerDied","Data":"9e98b1e0776c21e798c8ec0399674b680e3006908cb7040c91104594062fa43f"} Feb 16 12:52:07 crc kubenswrapper[4799]: I0216 12:52:07.170724 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 16 12:52:07 crc kubenswrapper[4799]: I0216 12:52:07.208503 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 16 12:52:07 crc kubenswrapper[4799]: I0216 12:52:07.259629 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 16 12:52:08 crc kubenswrapper[4799]: I0216 12:52:08.024487 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6746fc7768-pc68r" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.171:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.171:8443: connect: connection refused" Feb 16 12:52:08 crc kubenswrapper[4799]: I0216 12:52:08.998719 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.256005 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-94c85d75f-kbj7j"] Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 
12:52:09.256272 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-94c85d75f-kbj7j" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-api" containerID="cri-o://33358521838a5cd4b967f887308e41468e348361bf89b876e838e9d59c5150a8" gracePeriod=30 Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.257023 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-94c85d75f-kbj7j" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-httpd" containerID="cri-o://c0260f0c60d8d558529df01b63ccf5ac23d9148da125d1a9411328b50bb2608c" gracePeriod=30 Feb 16 12:52:09 crc kubenswrapper[4799]: E0216 12:52:09.305335 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:52:09 crc kubenswrapper[4799]: E0216 12:52:09.325409 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.328162 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bd85f5c47-gbtmk"] Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.331334 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: E0216 12:52:09.348744 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 16 12:52:09 crc kubenswrapper[4799]: E0216 12:52:09.348813 4799 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.355619 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bd85f5c47-gbtmk"] Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.394251 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-ovndb-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.394306 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-combined-ca-bundle\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.394341 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-internal-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.394404 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnf5x\" (UniqueName: \"kubernetes.io/projected/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-kube-api-access-lnf5x\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.394446 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-httpd-config\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.394477 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-public-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.394494 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-config\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.496099 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-public-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.496173 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-config\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.496260 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-ovndb-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.496290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-combined-ca-bundle\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.496323 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-internal-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.496388 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnf5x\" (UniqueName: \"kubernetes.io/projected/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-kube-api-access-lnf5x\") pod 
\"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.496447 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-httpd-config\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.507915 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-public-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.509023 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-ovndb-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.509045 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-combined-ca-bundle\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.509400 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-internal-tls-certs\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 
16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.516875 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-httpd-config\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.516924 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-config\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.533203 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnf5x\" (UniqueName: \"kubernetes.io/projected/cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd-kube-api-access-lnf5x\") pod \"neutron-5bd85f5c47-gbtmk\" (UID: \"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd\") " pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.661907 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:09 crc kubenswrapper[4799]: I0216 12:52:09.742748 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-94c85d75f-kbj7j" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.179:9696/\": read tcp 10.217.0.2:60110->10.217.0.179:9696: read: connection reset by peer" Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.296777 4799 generic.go:334] "Generic (PLEG): container finished" podID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerID="c0260f0c60d8d558529df01b63ccf5ac23d9148da125d1a9411328b50bb2608c" exitCode=0 Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.296840 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94c85d75f-kbj7j" event={"ID":"a74ff520-0a2d-4853-9070-fdf3f2aa7a47","Type":"ContainerDied","Data":"c0260f0c60d8d558529df01b63ccf5ac23d9148da125d1a9411328b50bb2608c"} Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.887563 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.925372 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl52n\" (UniqueName: \"kubernetes.io/projected/e821341e-3e99-4606-a96d-00adad2f39fb-kube-api-access-sl52n\") pod \"e821341e-3e99-4606-a96d-00adad2f39fb\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.925415 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-combined-ca-bundle\") pod \"e821341e-3e99-4606-a96d-00adad2f39fb\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.925571 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-db-sync-config-data\") pod \"e821341e-3e99-4606-a96d-00adad2f39fb\" (UID: \"e821341e-3e99-4606-a96d-00adad2f39fb\") " Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.932043 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e821341e-3e99-4606-a96d-00adad2f39fb-kube-api-access-sl52n" (OuterVolumeSpecName: "kube-api-access-sl52n") pod "e821341e-3e99-4606-a96d-00adad2f39fb" (UID: "e821341e-3e99-4606-a96d-00adad2f39fb"). InnerVolumeSpecName "kube-api-access-sl52n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.966113 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e821341e-3e99-4606-a96d-00adad2f39fb" (UID: "e821341e-3e99-4606-a96d-00adad2f39fb"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:10 crc kubenswrapper[4799]: I0216 12:52:10.973599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e821341e-3e99-4606-a96d-00adad2f39fb" (UID: "e821341e-3e99-4606-a96d-00adad2f39fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:11 crc kubenswrapper[4799]: I0216 12:52:11.027968 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl52n\" (UniqueName: \"kubernetes.io/projected/e821341e-3e99-4606-a96d-00adad2f39fb-kube-api-access-sl52n\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:11 crc kubenswrapper[4799]: I0216 12:52:11.027999 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:11 crc kubenswrapper[4799]: I0216 12:52:11.028008 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e821341e-3e99-4606-a96d-00adad2f39fb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:11 crc kubenswrapper[4799]: I0216 12:52:11.308499 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2bbw" event={"ID":"e821341e-3e99-4606-a96d-00adad2f39fb","Type":"ContainerDied","Data":"2859b4a8a026b718aff1b3509208d021d92122183b1274f26a6c39ef111fda96"} Feb 16 12:52:11 crc kubenswrapper[4799]: I0216 12:52:11.308931 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x2bbw" Feb 16 12:52:11 crc kubenswrapper[4799]: I0216 12:52:11.308862 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2859b4a8a026b718aff1b3509208d021d92122183b1274f26a6c39ef111fda96" Feb 16 12:52:11 crc kubenswrapper[4799]: E0216 12:52:11.662515 4799 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 16 12:52:11 crc kubenswrapper[4799]: E0216 12:52:11.662775 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5ld4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,R
ecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3e71f22a-250c-48e2-8309-7dfeb1325a2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:52:11 crc kubenswrapper[4799]: E0216 12:52:11.664653 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" Feb 16 12:52:12 
crc kubenswrapper[4799]: I0216 12:52:12.286986 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-56d6b7fd5c-s6xhs"] Feb 16 12:52:12 crc kubenswrapper[4799]: E0216 12:52:12.287905 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e821341e-3e99-4606-a96d-00adad2f39fb" containerName="barbican-db-sync" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.287924 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e821341e-3e99-4606-a96d-00adad2f39fb" containerName="barbican-db-sync" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.288203 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e821341e-3e99-4606-a96d-00adad2f39fb" containerName="barbican-db-sync" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.289553 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.337301 4799 generic.go:334] "Generic (PLEG): container finished" podID="8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" containerID="497721a037daa43af30da6128b2c50671ea4cdfc4bf35f240def5332dea09e29" exitCode=0 Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.337579 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m5dfr" event={"ID":"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930","Type":"ContainerDied","Data":"497721a037daa43af30da6128b2c50671ea4cdfc4bf35f240def5332dea09e29"} Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.337615 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-56d6b7fd5c-s6xhs"] Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.362189 4799 generic.go:334] "Generic (PLEG): container finished" podID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" exitCode=137 Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.362272 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"92cefdaf-4a4b-4771-9b15-0666298881e8","Type":"ContainerDied","Data":"6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54"} Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.362487 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.362678 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9jzxp" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.370105 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.387594 4799 generic.go:334] "Generic (PLEG): container finished" podID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerID="33358521838a5cd4b967f887308e41468e348361bf89b876e838e9d59c5150a8" exitCode=0 Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.387681 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94c85d75f-kbj7j" event={"ID":"a74ff520-0a2d-4853-9070-fdf3f2aa7a47","Type":"ContainerDied","Data":"33358521838a5cd4b967f887308e41468e348361bf89b876e838e9d59c5150a8"} Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.422560 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerName="ceilometer-notification-agent" containerID="cri-o://cd5deb5fd3db077a1a851740fa75368f52abf00dac239f25d0939245a9dec90c" gracePeriod=30 Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.423909 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerStarted","Data":"2f75000c46b704e065d5568d32cd3a97a46f55a9beb052030d6213d1e3601bd1"} Feb 16 12:52:12 
crc kubenswrapper[4799]: I0216 12:52:12.423995 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerName="sg-core" containerID="cri-o://c634a1be584089049bd50768fd674977a52cd6d3f29581008e3014fcac6db1c0" gracePeriod=30 Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.449191 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5584d58cd8-z4cwc"] Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.450876 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.461920 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.488725 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-config-data-custom\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.488788 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-logs\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.488902 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cadefef-9278-4473-a8c8-97911ac9b269-logs\") pod 
\"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.488922 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jhq\" (UniqueName: \"kubernetes.io/projected/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-kube-api-access-d2jhq\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.488994 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-config-data\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.489052 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-config-data\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.489192 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-combined-ca-bundle\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.489246 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-combined-ca-bundle\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.489271 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8kct\" (UniqueName: \"kubernetes.io/projected/6cadefef-9278-4473-a8c8-97911ac9b269-kube-api-access-b8kct\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.489391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-config-data-custom\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.568182 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5584d58cd8-z4cwc"] Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599181 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-config-data\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-combined-ca-bundle\") pod 
\"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599332 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-combined-ca-bundle\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599351 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8kct\" (UniqueName: \"kubernetes.io/projected/6cadefef-9278-4473-a8c8-97911ac9b269-kube-api-access-b8kct\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599455 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-config-data-custom\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599584 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-config-data-custom\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-logs\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599736 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cadefef-9278-4473-a8c8-97911ac9b269-logs\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599767 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jhq\" (UniqueName: \"kubernetes.io/projected/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-kube-api-access-d2jhq\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.599848 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-config-data\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.600885 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-logs\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.611687 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-combined-ca-bundle\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.612615 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cadefef-9278-4473-a8c8-97911ac9b269-logs\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.612679 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-656c667499-5d7pf"] Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.614477 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.621896 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-config-data-custom\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.642588 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-config-data\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.646358 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-config-data\") pod 
\"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.650277 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-config-data-custom\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.670737 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8kct\" (UniqueName: \"kubernetes.io/projected/6cadefef-9278-4473-a8c8-97911ac9b269-kube-api-access-b8kct\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.670876 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cadefef-9278-4473-a8c8-97911ac9b269-combined-ca-bundle\") pod \"barbican-keystone-listener-5584d58cd8-z4cwc\" (UID: \"6cadefef-9278-4473-a8c8-97911ac9b269\") " pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.703349 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-config\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.703434 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqb85\" (UniqueName: 
\"kubernetes.io/projected/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-kube-api-access-sqb85\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.703461 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-nb\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.703544 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-sb\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.703564 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-swift-storage-0\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.703602 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-svc\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.730556 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d2jhq\" (UniqueName: \"kubernetes.io/projected/99699fe4-f20c-42e0-9c4f-029b9ee24fdb-kube-api-access-d2jhq\") pod \"barbican-worker-56d6b7fd5c-s6xhs\" (UID: \"99699fe4-f20c-42e0-9c4f-029b9ee24fdb\") " pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.740199 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-656c667499-5d7pf"] Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.805453 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-svc\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.805557 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-config\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.805615 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqb85\" (UniqueName: \"kubernetes.io/projected/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-kube-api-access-sqb85\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.805647 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-nb\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc 
kubenswrapper[4799]: I0216 12:52:12.805745 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-swift-storage-0\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.805765 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-sb\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.805931 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.807824 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-nb\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.808896 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-svc\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.809556 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-swift-storage-0\") pod 
\"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.809866 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-sb\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.813509 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-config\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.841665 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqb85\" (UniqueName: \"kubernetes.io/projected/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-kube-api-access-sqb85\") pod \"dnsmasq-dns-656c667499-5d7pf\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.869047 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bd85f5c47-gbtmk"] Feb 16 12:52:12 crc kubenswrapper[4799]: I0216 12:52:12.931684 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.019166 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5dc4944754-qz6dk"] Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.020751 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.025581 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.079524 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5dc4944754-qz6dk"] Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.114751 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.114910 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zksmb\" (UniqueName: \"kubernetes.io/projected/91e5425c-df09-441e-99d7-43af068fc7b0-kube-api-access-zksmb\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.114947 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-combined-ca-bundle\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.114970 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e5425c-df09-441e-99d7-43af068fc7b0-logs\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " 
pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.115008 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data-custom\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.147262 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.215637 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.219614 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.236012 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-ovndb-tls-certs\") pod \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.236424 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-public-tls-certs\") pod \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.236626 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-httpd-config\") pod 
\"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.237777 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-combined-ca-bundle\") pod \"92cefdaf-4a4b-4771-9b15-0666298881e8\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.238264 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rznvk\" (UniqueName: \"kubernetes.io/projected/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-kube-api-access-rznvk\") pod \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.238383 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-config-data\") pod \"92cefdaf-4a4b-4771-9b15-0666298881e8\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.238618 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-combined-ca-bundle\") pod \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.238732 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwk6d\" (UniqueName: \"kubernetes.io/projected/92cefdaf-4a4b-4771-9b15-0666298881e8-kube-api-access-fwk6d\") pod \"92cefdaf-4a4b-4771-9b15-0666298881e8\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.238856 4799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-config\") pod \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.238996 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92cefdaf-4a4b-4771-9b15-0666298881e8-logs\") pod \"92cefdaf-4a4b-4771-9b15-0666298881e8\" (UID: \"92cefdaf-4a4b-4771-9b15-0666298881e8\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.239083 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-internal-tls-certs\") pod \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\" (UID: \"a74ff520-0a2d-4853-9070-fdf3f2aa7a47\") " Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.239741 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zksmb\" (UniqueName: \"kubernetes.io/projected/91e5425c-df09-441e-99d7-43af068fc7b0-kube-api-access-zksmb\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.240043 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-combined-ca-bundle\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.240500 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e5425c-df09-441e-99d7-43af068fc7b0-logs\") pod 
\"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.240603 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data-custom\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.240871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.260250 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92cefdaf-4a4b-4771-9b15-0666298881e8-logs" (OuterVolumeSpecName: "logs") pod "92cefdaf-4a4b-4771-9b15-0666298881e8" (UID: "92cefdaf-4a4b-4771-9b15-0666298881e8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.263903 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e5425c-df09-441e-99d7-43af068fc7b0-logs\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.317699 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a74ff520-0a2d-4853-9070-fdf3f2aa7a47" (UID: "a74ff520-0a2d-4853-9070-fdf3f2aa7a47"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.333417 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-kube-api-access-rznvk" (OuterVolumeSpecName: "kube-api-access-rznvk") pod "a74ff520-0a2d-4853-9070-fdf3f2aa7a47" (UID: "a74ff520-0a2d-4853-9070-fdf3f2aa7a47"). InnerVolumeSpecName "kube-api-access-rznvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.339023 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data-custom\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.339193 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cefdaf-4a4b-4771-9b15-0666298881e8-kube-api-access-fwk6d" (OuterVolumeSpecName: "kube-api-access-fwk6d") pod "92cefdaf-4a4b-4771-9b15-0666298881e8" (UID: "92cefdaf-4a4b-4771-9b15-0666298881e8"). InnerVolumeSpecName "kube-api-access-fwk6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.340099 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.345686 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.345722 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rznvk\" (UniqueName: \"kubernetes.io/projected/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-kube-api-access-rznvk\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.345734 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwk6d\" (UniqueName: 
\"kubernetes.io/projected/92cefdaf-4a4b-4771-9b15-0666298881e8-kube-api-access-fwk6d\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.345743 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92cefdaf-4a4b-4771-9b15-0666298881e8-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.347404 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zksmb\" (UniqueName: \"kubernetes.io/projected/91e5425c-df09-441e-99d7-43af068fc7b0-kube-api-access-zksmb\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.349883 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-combined-ca-bundle\") pod \"barbican-api-5dc4944754-qz6dk\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.477733 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd85f5c47-gbtmk" event={"ID":"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd","Type":"ContainerStarted","Data":"46edb8e8f550679d4d4b4bfcbc2c7a51d4208c656701709db8e9beccd15943ad"} Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.478275 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92cefdaf-4a4b-4771-9b15-0666298881e8" (UID: "92cefdaf-4a4b-4771-9b15-0666298881e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.487415 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a74ff520-0a2d-4853-9070-fdf3f2aa7a47" (UID: "a74ff520-0a2d-4853-9070-fdf3f2aa7a47"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.488603 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a74ff520-0a2d-4853-9070-fdf3f2aa7a47" (UID: "a74ff520-0a2d-4853-9070-fdf3f2aa7a47"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.490481 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"92cefdaf-4a4b-4771-9b15-0666298881e8","Type":"ContainerDied","Data":"12bb85416e229ec0e8cabae18b740175c437404c6ca26909f7a8ba86961d56a6"} Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.490532 4799 scope.go:117] "RemoveContainer" containerID="6c797bea62df6b1eae238e0fcb200808d922295750c0752d5c74dbb0e3476d54" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.490661 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.507305 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a74ff520-0a2d-4853-9070-fdf3f2aa7a47" (UID: "a74ff520-0a2d-4853-9070-fdf3f2aa7a47"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.527374 4799 generic.go:334] "Generic (PLEG): container finished" podID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerID="c634a1be584089049bd50768fd674977a52cd6d3f29581008e3014fcac6db1c0" exitCode=2 Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.527595 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e71f22a-250c-48e2-8309-7dfeb1325a2b","Type":"ContainerDied","Data":"c634a1be584089049bd50768fd674977a52cd6d3f29581008e3014fcac6db1c0"} Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.528703 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.542186 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94c85d75f-kbj7j" event={"ID":"a74ff520-0a2d-4853-9070-fdf3f2aa7a47","Type":"ContainerDied","Data":"32998adcda55817e532e4f80801fb8947fc6bf0328556c3d84a0bb3be7167f04"} Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.542280 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94c85d75f-kbj7j" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.542980 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-config" (OuterVolumeSpecName: "config") pod "a74ff520-0a2d-4853-9070-fdf3f2aa7a47" (UID: "a74ff520-0a2d-4853-9070-fdf3f2aa7a47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.552073 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.552115 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.552145 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.552158 4799 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.552169 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.552767 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-config-data" (OuterVolumeSpecName: "config-data") pod "92cefdaf-4a4b-4771-9b15-0666298881e8" (UID: "92cefdaf-4a4b-4771-9b15-0666298881e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.569277 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a74ff520-0a2d-4853-9070-fdf3f2aa7a47" (UID: "a74ff520-0a2d-4853-9070-fdf3f2aa7a47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.576564 4799 scope.go:117] "RemoveContainer" containerID="c0260f0c60d8d558529df01b63ccf5ac23d9148da125d1a9411328b50bb2608c" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.662873 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cefdaf-4a4b-4771-9b15-0666298881e8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.663245 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74ff520-0a2d-4853-9070-fdf3f2aa7a47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.676951 4799 scope.go:117] "RemoveContainer" containerID="33358521838a5cd4b967f887308e41468e348361bf89b876e838e9d59c5150a8" Feb 16 12:52:13 crc kubenswrapper[4799]: I0216 12:52:13.822204 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-56d6b7fd5c-s6xhs"] Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.196073 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-94c85d75f-kbj7j"] Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.229191 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-94c85d75f-kbj7j"] Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.243407 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/watcher-applier-0"] Feb 16 12:52:14 crc kubenswrapper[4799]: W0216 12:52:14.261593 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb91b90a6_5bc0_4fb2_9d24_1b1c9badd203.slice/crio-3b513fa75d472e67c0a8fc7c61b72d0b5360ace2b4b3c57570bf0223cc2b25ec WatchSource:0}: Error finding container 3b513fa75d472e67c0a8fc7c61b72d0b5360ace2b4b3c57570bf0223cc2b25ec: Status 404 returned error can't find the container with id 3b513fa75d472e67c0a8fc7c61b72d0b5360ace2b4b3c57570bf0223cc2b25ec Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.272278 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.283670 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 16 12:52:14 crc kubenswrapper[4799]: E0216 12:52:14.284411 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.284431 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:52:14 crc kubenswrapper[4799]: E0216 12:52:14.284445 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-api" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.284452 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-api" Feb 16 12:52:14 crc kubenswrapper[4799]: E0216 12:52:14.284490 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-httpd" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.284496 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-httpd" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.284681 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" containerName="watcher-applier" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.284695 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-httpd" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.284714 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-api" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.285416 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.287557 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.297478 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5584d58cd8-z4cwc"] Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.299884 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.318110 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.357747 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-656c667499-5d7pf"] Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.424066 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-db-sync-config-data\") pod \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.424233 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-etc-machine-id\") pod \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.424267 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dkjf\" (UniqueName: \"kubernetes.io/projected/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-kube-api-access-7dkjf\") pod \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.424425 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-scripts\") pod \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.424492 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-combined-ca-bundle\") pod \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.424522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-config-data\") pod \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\" (UID: \"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930\") " Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.424828 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-logs\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.424860 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-config-data\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.425022 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.425050 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wxfl\" (UniqueName: \"kubernetes.io/projected/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-kube-api-access-8wxfl\") pod \"watcher-applier-0\" (UID: 
\"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.432232 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5dc4944754-qz6dk"] Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.432620 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" (UID: "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.438595 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" (UID: "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.441314 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-kube-api-access-7dkjf" (OuterVolumeSpecName: "kube-api-access-7dkjf") pod "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" (UID: "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930"). InnerVolumeSpecName "kube-api-access-7dkjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.443295 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-scripts" (OuterVolumeSpecName: "scripts") pod "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" (UID: "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.512724 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" (UID: "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.527493 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.527677 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wxfl\" (UniqueName: \"kubernetes.io/projected/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-kube-api-access-8wxfl\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.527772 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-logs\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.527871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-config-data\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: 
I0216 12:52:14.527991 4799 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.528059 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.528114 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dkjf\" (UniqueName: \"kubernetes.io/projected/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-kube-api-access-7dkjf\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.528193 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.528251 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.530051 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-logs\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.540672 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-config-data" (OuterVolumeSpecName: "config-data") pod "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" (UID: "8e3d6bd7-bfe0-4951-8c70-ae25e5a07930"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.541762 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.544992 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wxfl\" (UniqueName: \"kubernetes.io/projected/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-kube-api-access-8wxfl\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.549224 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd018cf-77c0-4f89-a1b7-e821440b0fe1-config-data\") pod \"watcher-applier-0\" (UID: \"9bd018cf-77c0-4f89-a1b7-e821440b0fe1\") " pod="openstack/watcher-applier-0" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.580673 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" event={"ID":"6cadefef-9278-4473-a8c8-97911ac9b269","Type":"ContainerStarted","Data":"19238aa60f57d66e5044b2c11559003a28a8198808163b4f2cb1c1b72c549133"} Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.595004 4799 generic.go:334] "Generic (PLEG): container finished" podID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerID="cd5deb5fd3db077a1a851740fa75368f52abf00dac239f25d0939245a9dec90c" exitCode=0 Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.595091 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3e71f22a-250c-48e2-8309-7dfeb1325a2b","Type":"ContainerDied","Data":"cd5deb5fd3db077a1a851740fa75368f52abf00dac239f25d0939245a9dec90c"} Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.598019 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dc4944754-qz6dk" event={"ID":"91e5425c-df09-441e-99d7-43af068fc7b0","Type":"ContainerStarted","Data":"dfeeac113d9d7621f53249bc4a9f0a1ee4bb8a4cfee47775f601fb6a6ede23e4"} Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.604523 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" event={"ID":"99699fe4-f20c-42e0-9c4f-029b9ee24fdb","Type":"ContainerStarted","Data":"bd45787c766e51485c316d712af7ac7596a0d49605dcb2f9e4d584a8e09b8794"} Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.606280 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m5dfr" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.606426 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m5dfr" event={"ID":"8e3d6bd7-bfe0-4951-8c70-ae25e5a07930","Type":"ContainerDied","Data":"fb6fe4d75932c786ab0845afe32b34ddd9f017e4863891e7562221bb38868fad"} Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.606449 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb6fe4d75932c786ab0845afe32b34ddd9f017e4863891e7562221bb38868fad" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.611999 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd85f5c47-gbtmk" event={"ID":"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd","Type":"ContainerStarted","Data":"8257d2f5ef6f2b7d174a3eb0b9acd753ca234b7038fdf26f869af0472e8e310e"} Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.615323 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656c667499-5d7pf" 
event={"ID":"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203","Type":"ContainerStarted","Data":"3b513fa75d472e67c0a8fc7c61b72d0b5360ace2b4b3c57570bf0223cc2b25ec"} Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.630546 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:14 crc kubenswrapper[4799]: I0216 12:52:14.631573 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.102625 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.123417 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:15 crc kubenswrapper[4799]: E0216 12:52:15.124172 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerName="sg-core" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.133678 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerName="sg-core" Feb 16 12:52:15 crc kubenswrapper[4799]: E0216 12:52:15.133763 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerName="ceilometer-notification-agent" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.133775 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerName="ceilometer-notification-agent" Feb 16 12:52:15 crc kubenswrapper[4799]: E0216 12:52:15.133850 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" containerName="cinder-db-sync" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.133861 4799 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" containerName="cinder-db-sync" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.134345 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerName="ceilometer-notification-agent" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.134378 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" containerName="sg-core" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.134401 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" containerName="cinder-db-sync" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.144218 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.156909 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.157179 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tswfv" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.157332 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.157500 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.222912 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92cefdaf-4a4b-4771-9b15-0666298881e8" path="/var/lib/kubelet/pods/92cefdaf-4a4b-4771-9b15-0666298881e8/volumes" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.223868 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" path="/var/lib/kubelet/pods/a74ff520-0a2d-4853-9070-fdf3f2aa7a47/volumes" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.260745 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.260789 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-656c667499-5d7pf"] Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.261577 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-run-httpd\") pod \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.261615 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-log-httpd\") pod \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.261644 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5ld4\" (UniqueName: \"kubernetes.io/projected/3e71f22a-250c-48e2-8309-7dfeb1325a2b-kube-api-access-c5ld4\") pod \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.261668 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-config-data\") pod \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.261722 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-scripts\") pod \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.261818 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-sg-core-conf-yaml\") pod \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.261847 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-combined-ca-bundle\") pod \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\" (UID: \"3e71f22a-250c-48e2-8309-7dfeb1325a2b\") " Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.263215 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-scripts\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.263262 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.263277 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.263293 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp8rh\" (UniqueName: \"kubernetes.io/projected/4254e62b-6303-4b05-8d67-9b9090d9d757-kube-api-access-hp8rh\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.263377 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.272237 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e71f22a-250c-48e2-8309-7dfeb1325a2b" (UID: "3e71f22a-250c-48e2-8309-7dfeb1325a2b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.272391 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4254e62b-6303-4b05-8d67-9b9090d9d757-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.272845 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.278044 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e71f22a-250c-48e2-8309-7dfeb1325a2b" (UID: "3e71f22a-250c-48e2-8309-7dfeb1325a2b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.319392 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e71f22a-250c-48e2-8309-7dfeb1325a2b-kube-api-access-c5ld4" (OuterVolumeSpecName: "kube-api-access-c5ld4") pod "3e71f22a-250c-48e2-8309-7dfeb1325a2b" (UID: "3e71f22a-250c-48e2-8309-7dfeb1325a2b"). InnerVolumeSpecName "kube-api-access-c5ld4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.324467 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-scripts" (OuterVolumeSpecName: "scripts") pod "3e71f22a-250c-48e2-8309-7dfeb1325a2b" (UID: "3e71f22a-250c-48e2-8309-7dfeb1325a2b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.328036 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d546d59d7-9lr8f"] Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.329793 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.371807 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d546d59d7-9lr8f"] Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379035 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4254e62b-6303-4b05-8d67-9b9090d9d757-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379116 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-scripts\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379183 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379206 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " 
pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379225 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp8rh\" (UniqueName: \"kubernetes.io/projected/4254e62b-6303-4b05-8d67-9b9090d9d757-kube-api-access-hp8rh\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379402 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e71f22a-250c-48e2-8309-7dfeb1325a2b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379417 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5ld4\" (UniqueName: \"kubernetes.io/projected/3e71f22a-250c-48e2-8309-7dfeb1325a2b-kube-api-access-c5ld4\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.379429 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.386921 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.389639 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4254e62b-6303-4b05-8d67-9b9090d9d757-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.397427 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.397894 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.398301 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-scripts\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.399559 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.401787 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.402777 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.429913 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " 
pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.430100 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp8rh\" (UniqueName: \"kubernetes.io/projected/4254e62b-6303-4b05-8d67-9b9090d9d757-kube-api-access-hp8rh\") pod \"cinder-scheduler-0\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.437328 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e71f22a-250c-48e2-8309-7dfeb1325a2b" (UID: "3e71f22a-250c-48e2-8309-7dfeb1325a2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.452399 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3e71f22a-250c-48e2-8309-7dfeb1325a2b" (UID: "3e71f22a-250c-48e2-8309-7dfeb1325a2b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.504478 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-swift-storage-0\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.504904 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.504953 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-scripts\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505032 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfff\" (UniqueName: \"kubernetes.io/projected/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-kube-api-access-4tfff\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505286 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " 
pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505380 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-svc\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505496 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5aac42-012b-4b8f-8784-b62b5e4384e1-logs\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505525 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-sb\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505653 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv5wh\" (UniqueName: \"kubernetes.io/projected/bd5aac42-012b-4b8f-8784-b62b5e4384e1-kube-api-access-cv5wh\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505742 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd5aac42-012b-4b8f-8784-b62b5e4384e1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 
12:52:15.505780 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505824 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-nb\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.505885 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-config\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.506065 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.506093 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.512057 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.529783 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-config-data" (OuterVolumeSpecName: "config-data") pod "3e71f22a-250c-48e2-8309-7dfeb1325a2b" (UID: "3e71f22a-250c-48e2-8309-7dfeb1325a2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610411 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv5wh\" (UniqueName: \"kubernetes.io/projected/bd5aac42-012b-4b8f-8784-b62b5e4384e1-kube-api-access-cv5wh\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610488 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd5aac42-012b-4b8f-8784-b62b5e4384e1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610511 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610550 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-nb\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 
12:52:15.610573 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-config\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610593 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-swift-storage-0\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610645 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610660 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-scripts\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610724 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfff\" (UniqueName: \"kubernetes.io/projected/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-kube-api-access-4tfff\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610753 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.610961 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-svc\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.611008 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5aac42-012b-4b8f-8784-b62b5e4384e1-logs\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.611041 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-sb\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.611091 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e71f22a-250c-48e2-8309-7dfeb1325a2b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.611576 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd5aac42-012b-4b8f-8784-b62b5e4384e1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.617929 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-swift-storage-0\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.618415 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5aac42-012b-4b8f-8784-b62b5e4384e1-logs\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.619879 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-svc\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.619936 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-config\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.623835 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-nb\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.627411 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.633785 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.636737 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.637814 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-scripts\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.643962 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv5wh\" (UniqueName: \"kubernetes.io/projected/bd5aac42-012b-4b8f-8784-b62b5e4384e1-kube-api-access-cv5wh\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.647750 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") " pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.668259 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4tfff\" (UniqueName: \"kubernetes.io/projected/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-kube-api-access-4tfff\") pod \"dnsmasq-dns-5d546d59d7-9lr8f\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.701616 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.715889 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.716117 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dc4944754-qz6dk" event={"ID":"91e5425c-df09-441e-99d7-43af068fc7b0","Type":"ContainerStarted","Data":"ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3"} Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.722104 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e71f22a-250c-48e2-8309-7dfeb1325a2b","Type":"ContainerDied","Data":"018c6af4407a3b910e72e4027bf79f2cc50b5b2fda9264ea2f437f7910b13f66"} Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.722193 4799 scope.go:117] "RemoveContainer" containerID="c634a1be584089049bd50768fd674977a52cd6d3f29581008e3014fcac6db1c0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.722372 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.752678 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.854549 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bd85f5c47-gbtmk" event={"ID":"cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd","Type":"ContainerStarted","Data":"b8143902fd7f98679864d5ef1011c571550d78e7b3be98928543c81625ee1b76"} Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.856109 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.885490 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.957109 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bd85f5c47-gbtmk" podStartSLOduration=6.957075272 podStartE2EDuration="6.957075272s" podCreationTimestamp="2026-02-16 12:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:15.924587899 +0000 UTC m=+1241.517603243" watchObservedRunningTime="2026-02-16 12:52:15.957075272 +0000 UTC m=+1241.550090606" Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.967060 4799 generic.go:334] "Generic (PLEG): container finished" podID="b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" containerID="5d556160c0816e8087afe0692634196b99b8877cfd6169baa3c2cf415125e8e4" exitCode=0 Feb 16 12:52:15 crc kubenswrapper[4799]: I0216 12:52:15.967188 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656c667499-5d7pf" event={"ID":"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203","Type":"ContainerDied","Data":"5d556160c0816e8087afe0692634196b99b8877cfd6169baa3c2cf415125e8e4"} Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.050078 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-applier-0" event={"ID":"9bd018cf-77c0-4f89-a1b7-e821440b0fe1","Type":"ContainerStarted","Data":"7698ee2e8af100443325f25cc6a287eaa65f3e7bd42616253b443703e1a4de14"} Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.050456 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.075062 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.090511 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.102879 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.106617 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.122195 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.122368 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.126737 4799 scope.go:117] "RemoveContainer" containerID="cd5deb5fd3db077a1a851740fa75368f52abf00dac239f25d0939245a9dec90c" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.148607 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:16 crc kubenswrapper[4799]: E0216 12:52:16.169371 4799 mount_linux.go:282] Mount failed: exit status 32 Feb 16 12:52:16 crc kubenswrapper[4799]: Mounting command: mount Feb 16 12:52:16 crc kubenswrapper[4799]: Mounting arguments: --no-canonicalize -o bind /proc/4799/fd/31 
/var/lib/kubelet/pods/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203/volume-subpaths/dns-svc/dnsmasq-dns/1 Feb 16 12:52:16 crc kubenswrapper[4799]: Output: mount: /var/lib/kubelet/pods/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203/volume-subpaths/dns-svc/dnsmasq-dns/1: mount(2) system call failed: No such file or directory. Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.249031 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwzp\" (UniqueName: \"kubernetes.io/projected/53f42733-a32b-4b85-b53d-842ffb840563-kube-api-access-6rwzp\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.249098 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-log-httpd\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.249164 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.249247 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-config-data\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.249328 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-scripts\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.249399 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-run-httpd\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.249428 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: E0216 12:52:16.251911 4799 kubelet_pods.go:349] "Failed to prepare subPath for volumeMount of the container" err=< Feb 16 12:52:16 crc kubenswrapper[4799]: error mounting /var/lib/kubelet/pods/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203/volumes/kubernetes.io~configmap/dns-svc/..2026_02_16_12_52_12.969459188/dns-svc: mount failed: exit status 32 Feb 16 12:52:16 crc kubenswrapper[4799]: Mounting command: mount Feb 16 12:52:16 crc kubenswrapper[4799]: Mounting arguments: --no-canonicalize -o bind /proc/4799/fd/31 /var/lib/kubelet/pods/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203/volume-subpaths/dns-svc/dnsmasq-dns/1 Feb 16 12:52:16 crc kubenswrapper[4799]: Output: mount: /var/lib/kubelet/pods/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203/volume-subpaths/dns-svc/dnsmasq-dns/1: mount(2) system call failed: No such file or directory. 
Feb 16 12:52:16 crc kubenswrapper[4799]: > containerName="dnsmasq-dns" volumeMountName="dns-svc" Feb 16 12:52:16 crc kubenswrapper[4799]: E0216 12:52:16.252081 4799 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:dnsmasq-dns,Image:38.102.83.119:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646h68bh56hf9h5dbhch5c5h59chc7h9dh687h64ch649h58h5c4h58bhcdh8fh5fdh546h5ch5cbh699h5f6h78h558h5dfh56ch686h54bh5c8h599q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Na
me:kube-api-access-sqb85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-656c667499-5d7pf_openstack(b91b90a6-5bc0-4fb2-9d24-1b1c9badd203): CreateContainerConfigError: failed to prepare subPath for volumeMount \"dns-svc\" of container \"dnsmasq-dns\"" logger="UnhandledError" Feb 16 12:52:16 crc kubenswrapper[4799]: E0216 12:52:16.253252 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerConfigError: \"failed to prepare subPath for volumeMount \\\"dns-svc\\\" of container \\\"dnsmasq-dns\\\"\"" pod="openstack/dnsmasq-dns-656c667499-5d7pf" podUID="b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" Feb 16 12:52:16 crc kubenswrapper[4799]: 
I0216 12:52:16.351014 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rwzp\" (UniqueName: \"kubernetes.io/projected/53f42733-a32b-4b85-b53d-842ffb840563-kube-api-access-6rwzp\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.351942 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-log-httpd\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.352017 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.352085 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-config-data\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.352344 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-scripts\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.352492 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-run-httpd\") pod \"ceilometer-0\" 
(UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.352529 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: E0216 12:52:16.395281 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e71f22a_250c_48e2_8309_7dfeb1325a2b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e71f22a_250c_48e2_8309_7dfeb1325a2b.slice/crio-018c6af4407a3b910e72e4027bf79f2cc50b5b2fda9264ea2f437f7910b13f66\": RecentStats: unable to find data in memory cache]" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.430644 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-log-httpd\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.431535 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-run-httpd\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.438611 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-scripts\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " 
pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.438702 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.443783 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rwzp\" (UniqueName: \"kubernetes.io/projected/53f42733-a32b-4b85-b53d-842ffb840563-kube-api-access-6rwzp\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.445981 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-config-data\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.449267 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.587207 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.727551 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.782263 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 12:52:16 crc kubenswrapper[4799]: W0216 12:52:16.797364 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd5aac42_012b_4b8f_8784_b62b5e4384e1.slice/crio-3a365bf286e2410e338b7fa83ee7f1807887f6a564f617ba3e9180962ede27cf WatchSource:0}: Error finding container 3a365bf286e2410e338b7fa83ee7f1807887f6a564f617ba3e9180962ede27cf: Status 404 returned error can't find the container with id 3a365bf286e2410e338b7fa83ee7f1807887f6a564f617ba3e9180962ede27cf Feb 16 12:52:16 crc kubenswrapper[4799]: I0216 12:52:16.802111 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d546d59d7-9lr8f"] Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.215688 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e71f22a-250c-48e2-8309-7dfeb1325a2b" path="/var/lib/kubelet/pods/3e71f22a-250c-48e2-8309-7dfeb1325a2b/volumes" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.226244 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.226278 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.226310 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4254e62b-6303-4b05-8d67-9b9090d9d757","Type":"ContainerStarted","Data":"700c3bd6941a5140d377cf0bb0f0c73182c3e7003ed0967ccbb96aedb7e28024"} Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.226336 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dc4944754-qz6dk" 
event={"ID":"91e5425c-df09-441e-99d7-43af068fc7b0","Type":"ContainerStarted","Data":"d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f"} Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.236771 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" event={"ID":"86fd8d1c-0696-41e9-a6f9-53efb050f0ce","Type":"ContainerStarted","Data":"7b8b631582cb45509ac8d1eb578b210dfd0294deb71d48c98618eb02ae68347b"} Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.260791 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd5aac42-012b-4b8f-8784-b62b5e4384e1","Type":"ContainerStarted","Data":"3a365bf286e2410e338b7fa83ee7f1807887f6a564f617ba3e9180962ede27cf"} Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.273952 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5dc4944754-qz6dk" podStartSLOduration=5.273928663 podStartE2EDuration="5.273928663s" podCreationTimestamp="2026-02-16 12:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:17.249180592 +0000 UTC m=+1242.842195926" watchObservedRunningTime="2026-02-16 12:52:17.273928663 +0000 UTC m=+1242.866943997" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.276447 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.399721 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.434551 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.723640 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-656c667499-5d7pf" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.821558 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-svc\") pod \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.821929 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-sb\") pod \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.822075 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-config\") pod \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.822131 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-swift-storage-0\") pod \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.822435 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-nb\") pod \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.822462 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqb85\" 
(UniqueName: \"kubernetes.io/projected/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-kube-api-access-sqb85\") pod \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\" (UID: \"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203\") " Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.835339 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-kube-api-access-sqb85" (OuterVolumeSpecName: "kube-api-access-sqb85") pod "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" (UID: "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203"). InnerVolumeSpecName "kube-api-access-sqb85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.860891 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" (UID: "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.865253 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" (UID: "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.866765 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" (UID: "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.882646 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" (UID: "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.904195 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-config" (OuterVolumeSpecName: "config") pod "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" (UID: "b91b90a6-5bc0-4fb2-9d24-1b1c9badd203"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.933136 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.933202 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqb85\" (UniqueName: \"kubernetes.io/projected/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-kube-api-access-sqb85\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.933219 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.933231 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.933242 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-config\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:17 crc kubenswrapper[4799]: I0216 12:52:17.933253 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.024796 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6746fc7768-pc68r" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.171:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.171:8443: connect: connection refused"
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.284314 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-656c667499-5d7pf" event={"ID":"b91b90a6-5bc0-4fb2-9d24-1b1c9badd203","Type":"ContainerDied","Data":"3b513fa75d472e67c0a8fc7c61b72d0b5360ace2b4b3c57570bf0223cc2b25ec"}
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.284373 4799 scope.go:117] "RemoveContainer" containerID="5d556160c0816e8087afe0692634196b99b8877cfd6169baa3c2cf415125e8e4"
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.284369 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-656c667499-5d7pf"
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.286157 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9bd018cf-77c0-4f89-a1b7-e821440b0fe1","Type":"ContainerStarted","Data":"17e7034c99fa8a210d7ef6c3ff4b2580f3be63bece6e12df49c2bd7a62de0ccd"}
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.287975 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerStarted","Data":"6df5b815b5c00219640bb4559ce54419df0662b68003f78e1261ed8542ed6c9d"}
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.357183 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-656c667499-5d7pf"]
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.368968 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-656c667499-5d7pf"]
Feb 16 12:52:18 crc kubenswrapper[4799]: I0216 12:52:18.458363 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.162736 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" path="/var/lib/kubelet/pods/b91b90a6-5bc0-4fb2-9d24-1b1c9badd203/volumes"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.783544 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7cdd7b58f8-6bxrn"]
Feb 16 12:52:19 crc kubenswrapper[4799]: E0216 12:52:19.784198 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" containerName="init"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.784220 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" containerName="init"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.784477 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91b90a6-5bc0-4fb2-9d24-1b1c9badd203" containerName="init"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.787354 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.797760 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.798004 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.804451 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cdd7b58f8-6bxrn"]
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.887207 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-config-data-custom\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.887503 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6z5r\" (UniqueName: \"kubernetes.io/projected/b2510448-629c-43df-9492-a07c96a8b5f0-kube-api-access-j6z5r\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.887621 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-public-tls-certs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.887718 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-combined-ca-bundle\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.887783 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-config-data\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.887818 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2510448-629c-43df-9492-a07c96a8b5f0-logs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.887851 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-internal-tls-certs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.990256 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2510448-629c-43df-9492-a07c96a8b5f0-logs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.990331 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-internal-tls-certs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.990385 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-config-data-custom\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.990419 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6z5r\" (UniqueName: \"kubernetes.io/projected/b2510448-629c-43df-9492-a07c96a8b5f0-kube-api-access-j6z5r\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.990449 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-public-tls-certs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.990514 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-combined-ca-bundle\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.990580 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-config-data\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.990704 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2510448-629c-43df-9492-a07c96a8b5f0-logs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.996392 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-config-data\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.996899 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-internal-tls-certs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:19 crc kubenswrapper[4799]: I0216 12:52:19.998332 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-config-data-custom\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.001066 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-public-tls-certs\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.001697 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2510448-629c-43df-9492-a07c96a8b5f0-combined-ca-bundle\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.011903 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6z5r\" (UniqueName: \"kubernetes.io/projected/b2510448-629c-43df-9492-a07c96a8b5f0-kube-api-access-j6z5r\") pod \"barbican-api-7cdd7b58f8-6bxrn\" (UID: \"b2510448-629c-43df-9492-a07c96a8b5f0\") " pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.108573 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.310877 4799 generic.go:334] "Generic (PLEG): container finished" podID="89824920-bcd3-4640-b27b-68554fad00bb" containerID="2f75000c46b704e065d5568d32cd3a97a46f55a9beb052030d6213d1e3601bd1" exitCode=1
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.310962 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerDied","Data":"2f75000c46b704e065d5568d32cd3a97a46f55a9beb052030d6213d1e3601bd1"}
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.311025 4799 scope.go:117] "RemoveContainer" containerID="1bb4529c89b693c7cc7a09aad42d8ee033c49094a16b88a5d9b2c83932b8094b"
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.312574 4799 scope.go:117] "RemoveContainer" containerID="2f75000c46b704e065d5568d32cd3a97a46f55a9beb052030d6213d1e3601bd1"
Feb 16 12:52:20 crc kubenswrapper[4799]: E0216 12:52:20.312886 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(89824920-bcd3-4640-b27b-68554fad00bb)\"" pod="openstack/watcher-decision-engine-0" podUID="89824920-bcd3-4640-b27b-68554fad00bb"
Feb 16 12:52:20 crc kubenswrapper[4799]: I0216 12:52:20.356240 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.356199619 podStartE2EDuration="6.356199619s" podCreationTimestamp="2026-02-16 12:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:20.334525176 +0000 UTC m=+1245.927540500" watchObservedRunningTime="2026-02-16 12:52:20.356199619 +0000 UTC m=+1245.949214953"
Feb 16 12:52:21 crc kubenswrapper[4799]: I0216 12:52:21.382427 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" event={"ID":"86fd8d1c-0696-41e9-a6f9-53efb050f0ce","Type":"ContainerStarted","Data":"3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782"}
Feb 16 12:52:21 crc kubenswrapper[4799]: I0216 12:52:21.399407 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd5aac42-012b-4b8f-8784-b62b5e4384e1","Type":"ContainerStarted","Data":"3ae99a45966c2174ae37e8d501b745b1aa82d0fb8c9eb8c6a8ebb708824456ec"}
Feb 16 12:52:21 crc kubenswrapper[4799]: I0216 12:52:21.793439 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 12:52:21 crc kubenswrapper[4799]: I0216 12:52:21.793956 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 12:52:22 crc kubenswrapper[4799]: I0216 12:52:22.411918 4799 generic.go:334] "Generic (PLEG): container finished" podID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" containerID="3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782" exitCode=0
Feb 16 12:52:22 crc kubenswrapper[4799]: I0216 12:52:22.411962 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" event={"ID":"86fd8d1c-0696-41e9-a6f9-53efb050f0ce","Type":"ContainerDied","Data":"3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782"}
Feb 16 12:52:22 crc kubenswrapper[4799]: I0216 12:52:22.996060 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cdd7b58f8-6bxrn"]
Feb 16 12:52:23 crc kubenswrapper[4799]: W0216 12:52:23.201703 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2510448_629c_43df_9492_a07c96a8b5f0.slice/crio-380459fd9bbe08bdc911cc72b9e0e046fad0c4452b4609ee574f51b4916225a9 WatchSource:0}: Error finding container 380459fd9bbe08bdc911cc72b9e0e046fad0c4452b4609ee574f51b4916225a9: Status 404 returned error can't find the container with id 380459fd9bbe08bdc911cc72b9e0e046fad0c4452b4609ee574f51b4916225a9
Feb 16 12:52:23 crc kubenswrapper[4799]: I0216 12:52:23.426976 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" event={"ID":"6cadefef-9278-4473-a8c8-97911ac9b269","Type":"ContainerStarted","Data":"932039cd3fa8b95cfc13524e9765fd6a76e8bd9421776fc8159d975f97efa0f1"}
Feb 16 12:52:23 crc kubenswrapper[4799]: I0216 12:52:23.433076 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" event={"ID":"86fd8d1c-0696-41e9-a6f9-53efb050f0ce","Type":"ContainerStarted","Data":"33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f"}
Feb 16 12:52:23 crc kubenswrapper[4799]: I0216 12:52:23.433348 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f"
Feb 16 12:52:23 crc kubenswrapper[4799]: I0216 12:52:23.439467 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" event={"ID":"99699fe4-f20c-42e0-9c4f-029b9ee24fdb","Type":"ContainerStarted","Data":"ee374a3b014409813b833b68a1d8dfeda9b2e4a19d1522b8bafe3cbb7762b852"}
Feb 16 12:52:23 crc kubenswrapper[4799]: I0216 12:52:23.441466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cdd7b58f8-6bxrn" event={"ID":"b2510448-629c-43df-9492-a07c96a8b5f0","Type":"ContainerStarted","Data":"380459fd9bbe08bdc911cc72b9e0e046fad0c4452b4609ee574f51b4916225a9"}
Feb 16 12:52:23 crc kubenswrapper[4799]: I0216 12:52:23.461083 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" podStartSLOduration=8.461050304 podStartE2EDuration="8.461050304s" podCreationTimestamp="2026-02-16 12:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:23.451301163 +0000 UTC m=+1249.044316497" watchObservedRunningTime="2026-02-16 12:52:23.461050304 +0000 UTC m=+1249.054065638"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.477309 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" event={"ID":"99699fe4-f20c-42e0-9c4f-029b9ee24fdb","Type":"ContainerStarted","Data":"bd57b22a935371a89d25dc2eb8a82a6ef9c8c9a418d30f1578976d47e22f6dd0"}
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.496311 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerStarted","Data":"93c4f9bcdd3b8aa3a53b4baddec229ace78858c76cbc43de1ca4ae9ba436fb08"}
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.522737 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd5aac42-012b-4b8f-8784-b62b5e4384e1","Type":"ContainerStarted","Data":"1db47fd713bd098282f66e0a64adecacaaf667f9e34d42f226cf5a5875ff6f45"}
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.522796 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerName="cinder-api-log" containerID="cri-o://3ae99a45966c2174ae37e8d501b745b1aa82d0fb8c9eb8c6a8ebb708824456ec" gracePeriod=30
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.522901 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerName="cinder-api" containerID="cri-o://1db47fd713bd098282f66e0a64adecacaaf667f9e34d42f226cf5a5875ff6f45" gracePeriod=30
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.523102 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.538543 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4254e62b-6303-4b05-8d67-9b9090d9d757","Type":"ContainerStarted","Data":"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc"}
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.565058 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-56d6b7fd5c-s6xhs" podStartSLOduration=3.665216251 podStartE2EDuration="12.565036716s" podCreationTimestamp="2026-02-16 12:52:12 +0000 UTC" firstStartedPulling="2026-02-16 12:52:13.919429692 +0000 UTC m=+1239.512445026" lastFinishedPulling="2026-02-16 12:52:22.819250147 +0000 UTC m=+1248.412265491" observedRunningTime="2026-02-16 12:52:24.522648578 +0000 UTC m=+1250.115663912" watchObservedRunningTime="2026-02-16 12:52:24.565036716 +0000 UTC m=+1250.158052040"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.567516 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cdd7b58f8-6bxrn" event={"ID":"b2510448-629c-43df-9492-a07c96a8b5f0","Type":"ContainerStarted","Data":"c76b8d1120bcd0e34046bb46ea8506d43e9d131404d272267a8ca76c149231e3"}
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.567577 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cdd7b58f8-6bxrn" event={"ID":"b2510448-629c-43df-9492-a07c96a8b5f0","Type":"ContainerStarted","Data":"66e8b38c39437df337412c3c52df45d033d0bc39d3421a12566731a80faf5e58"}
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.568797 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.568837 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cdd7b58f8-6bxrn"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.582781 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" event={"ID":"6cadefef-9278-4473-a8c8-97911ac9b269","Type":"ContainerStarted","Data":"ce70dd7f9c4ad95cf1ecf130e467a826dcadeb6a2fbe00cb9e1120e40a33be50"}
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.604251 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.604219913 podStartE2EDuration="9.604219913s" podCreationTimestamp="2026-02-16 12:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:24.555584005 +0000 UTC m=+1250.148599359" watchObservedRunningTime="2026-02-16 12:52:24.604219913 +0000 UTC m=+1250.197235247"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.617861 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7cdd7b58f8-6bxrn" podStartSLOduration=5.617837544 podStartE2EDuration="5.617837544s" podCreationTimestamp="2026-02-16 12:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:24.59958759 +0000 UTC m=+1250.192602924" watchObservedRunningTime="2026-02-16 12:52:24.617837544 +0000 UTC m=+1250.210852878"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.634245 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.634867 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.657513 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5584d58cd8-z4cwc" podStartSLOduration=4.738031909 podStartE2EDuration="12.657488204s" podCreationTimestamp="2026-02-16 12:52:12 +0000 UTC" firstStartedPulling="2026-02-16 12:52:14.24150398 +0000 UTC m=+1239.834519314" lastFinishedPulling="2026-02-16 12:52:22.160960265 +0000 UTC m=+1247.753975609" observedRunningTime="2026-02-16 12:52:24.642274627 +0000 UTC m=+1250.235289981" watchObservedRunningTime="2026-02-16 12:52:24.657488204 +0000 UTC m=+1250.250503538"
Feb 16 12:52:24 crc kubenswrapper[4799]: I0216 12:52:24.685452 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.595340 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerStarted","Data":"adacb7f15ca185ed5bdf64f13c3526e805b3708d8ffeefb41148fef96f86bd18"}
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.598824 4799 generic.go:334] "Generic (PLEG): container finished" podID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerID="1db47fd713bd098282f66e0a64adecacaaf667f9e34d42f226cf5a5875ff6f45" exitCode=0
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.598857 4799 generic.go:334] "Generic (PLEG): container finished" podID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerID="3ae99a45966c2174ae37e8d501b745b1aa82d0fb8c9eb8c6a8ebb708824456ec" exitCode=143
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.598892 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd5aac42-012b-4b8f-8784-b62b5e4384e1","Type":"ContainerDied","Data":"1db47fd713bd098282f66e0a64adecacaaf667f9e34d42f226cf5a5875ff6f45"}
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.598941 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd5aac42-012b-4b8f-8784-b62b5e4384e1","Type":"ContainerDied","Data":"3ae99a45966c2174ae37e8d501b745b1aa82d0fb8c9eb8c6a8ebb708824456ec"}
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.602640 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4254e62b-6303-4b05-8d67-9b9090d9d757","Type":"ContainerStarted","Data":"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8"}
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.633333 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.175727639 podStartE2EDuration="10.633307473s" podCreationTimestamp="2026-02-16 12:52:15 +0000 UTC" firstStartedPulling="2026-02-16 12:52:16.762794131 +0000 UTC m=+1242.355809465" lastFinishedPulling="2026-02-16 12:52:23.220373965 +0000 UTC m=+1248.813389299" observedRunningTime="2026-02-16 12:52:25.627680591 +0000 UTC m=+1251.220695925" watchObservedRunningTime="2026-02-16 12:52:25.633307473 +0000 UTC m=+1251.226322817"
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.697727 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.872024 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.872101 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.872113 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 16 12:52:25 crc kubenswrapper[4799]: I0216 12:52:25.872890 4799 scope.go:117] "RemoveContainer" containerID="2f75000c46b704e065d5568d32cd3a97a46f55a9beb052030d6213d1e3601bd1"
Feb 16 12:52:25 crc kubenswrapper[4799]: E0216 12:52:25.873109 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(89824920-bcd3-4640-b27b-68554fad00bb)\"" pod="openstack/watcher-decision-engine-0" podUID="89824920-bcd3-4640-b27b-68554fad00bb"
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.086920 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5dc4944754-qz6dk"
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.193337 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5dc4944754-qz6dk"
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.221016 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.340751 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data-custom\") pod \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") "
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.340961 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd5aac42-012b-4b8f-8784-b62b5e4384e1-etc-machine-id\") pod \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") "
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.341002 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-scripts\") pod \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") "
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.341022 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv5wh\" (UniqueName: \"kubernetes.io/projected/bd5aac42-012b-4b8f-8784-b62b5e4384e1-kube-api-access-cv5wh\") pod \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") "
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.341072 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-combined-ca-bundle\") pod \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") "
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.341183 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5aac42-012b-4b8f-8784-b62b5e4384e1-logs\") pod \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") "
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.341267 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data\") pod \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\" (UID: \"bd5aac42-012b-4b8f-8784-b62b5e4384e1\") "
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.346585 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5aac42-012b-4b8f-8784-b62b5e4384e1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bd5aac42-012b-4b8f-8784-b62b5e4384e1" (UID: "bd5aac42-012b-4b8f-8784-b62b5e4384e1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.349568 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5aac42-012b-4b8f-8784-b62b5e4384e1-logs" (OuterVolumeSpecName: "logs") pod "bd5aac42-012b-4b8f-8784-b62b5e4384e1" (UID: "bd5aac42-012b-4b8f-8784-b62b5e4384e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.391326 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bd5aac42-012b-4b8f-8784-b62b5e4384e1" (UID: "bd5aac42-012b-4b8f-8784-b62b5e4384e1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.391500 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5aac42-012b-4b8f-8784-b62b5e4384e1-kube-api-access-cv5wh" (OuterVolumeSpecName: "kube-api-access-cv5wh") pod "bd5aac42-012b-4b8f-8784-b62b5e4384e1" (UID: "bd5aac42-012b-4b8f-8784-b62b5e4384e1"). InnerVolumeSpecName "kube-api-access-cv5wh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.391579 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-scripts" (OuterVolumeSpecName: "scripts") pod "bd5aac42-012b-4b8f-8784-b62b5e4384e1" (UID: "bd5aac42-012b-4b8f-8784-b62b5e4384e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.446292 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.446335 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv5wh\" (UniqueName: \"kubernetes.io/projected/bd5aac42-012b-4b8f-8784-b62b5e4384e1-kube-api-access-cv5wh\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.446352 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5aac42-012b-4b8f-8784-b62b5e4384e1-logs\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.446366 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.446378 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd5aac42-012b-4b8f-8784-b62b5e4384e1-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.483109 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd5aac42-012b-4b8f-8784-b62b5e4384e1" (UID: "bd5aac42-012b-4b8f-8784-b62b5e4384e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.547741 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.629946 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.630569 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd5aac42-012b-4b8f-8784-b62b5e4384e1","Type":"ContainerDied","Data":"3a365bf286e2410e338b7fa83ee7f1807887f6a564f617ba3e9180962ede27cf"}
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.630613 4799 scope.go:117] "RemoveContainer" containerID="1db47fd713bd098282f66e0a64adecacaaf667f9e34d42f226cf5a5875ff6f45"
Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.635391 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data" (OuterVolumeSpecName: "config-data") pod "bd5aac42-012b-4b8f-8784-b62b5e4384e1" (UID: "bd5aac42-012b-4b8f-8784-b62b5e4384e1").
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.650618 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5aac42-012b-4b8f-8784-b62b5e4384e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.810355 4799 scope.go:117] "RemoveContainer" containerID="3ae99a45966c2174ae37e8d501b745b1aa82d0fb8c9eb8c6a8ebb708824456ec" Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.970339 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 12:52:26 crc kubenswrapper[4799]: I0216 12:52:26.987831 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.002695 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 12:52:27 crc kubenswrapper[4799]: E0216 12:52:27.003222 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerName="cinder-api" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.003240 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerName="cinder-api" Feb 16 12:52:27 crc kubenswrapper[4799]: E0216 12:52:27.004960 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerName="cinder-api-log" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.005481 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerName="cinder-api-log" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.005714 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerName="cinder-api-log" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 
12:52:27.005729 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" containerName="cinder-api" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.007932 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.011546 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.017310 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.017525 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.017757 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.121751 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-config-data\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.121837 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-scripts\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.121878 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c3718e-7e67-4586-8532-6883f43129bd-logs\") pod \"cinder-api-0\" (UID: 
\"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.121919 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.121999 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15c3718e-7e67-4586-8532-6883f43129bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.122035 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.122071 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.122144 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4n6\" (UniqueName: \"kubernetes.io/projected/15c3718e-7e67-4586-8532-6883f43129bd-kube-api-access-6d4n6\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc 
kubenswrapper[4799]: I0216 12:52:27.122196 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.176117 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5aac42-012b-4b8f-8784-b62b5e4384e1" path="/var/lib/kubelet/pods/bd5aac42-012b-4b8f-8784-b62b5e4384e1/volumes" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225231 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/15c3718e-7e67-4586-8532-6883f43129bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225295 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225446 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225495 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4n6\" (UniqueName: \"kubernetes.io/projected/15c3718e-7e67-4586-8532-6883f43129bd-kube-api-access-6d4n6\") pod \"cinder-api-0\" (UID: 
\"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225542 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225578 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-config-data\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225605 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-scripts\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225634 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c3718e-7e67-4586-8532-6883f43129bd-logs\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225661 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.225769 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/15c3718e-7e67-4586-8532-6883f43129bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.227584 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c3718e-7e67-4586-8532-6883f43129bd-logs\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.230982 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.231312 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-scripts\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.239963 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.244707 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.247839 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.267518 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d4n6\" (UniqueName: \"kubernetes.io/projected/15c3718e-7e67-4586-8532-6883f43129bd-kube-api-access-6d4n6\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.268950 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c3718e-7e67-4586-8532-6883f43129bd-config-data\") pod \"cinder-api-0\" (UID: \"15c3718e-7e67-4586-8532-6883f43129bd\") " pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.338721 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.701769 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerStarted","Data":"0b9cd67e2a46975e53cd8d565a95ef99a5d4bd17215447b5c416fcf88beb621d"} Feb 16 12:52:27 crc kubenswrapper[4799]: I0216 12:52:27.734047 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 12:52:27 crc kubenswrapper[4799]: W0216 12:52:27.747955 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15c3718e_7e67_4586_8532_6883f43129bd.slice/crio-fd9cc83da390f65b9ad53fc3b3412b9f7071008b43419afb0677c40f741f68c4 WatchSource:0}: Error finding container fd9cc83da390f65b9ad53fc3b3412b9f7071008b43419afb0677c40f741f68c4: Status 404 returned error can't find the container with id fd9cc83da390f65b9ad53fc3b3412b9f7071008b43419afb0677c40f741f68c4 Feb 16 12:52:28 crc kubenswrapper[4799]: I0216 12:52:28.024737 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6746fc7768-pc68r" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.171:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.171:8443: connect: connection refused" Feb 16 12:52:28 crc kubenswrapper[4799]: I0216 12:52:28.729640 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"15c3718e-7e67-4586-8532-6883f43129bd","Type":"ContainerStarted","Data":"ecef1d735f4989479a44ab56fefe5bcd30323917aae1ff7c2251e5fcc3f06bed"} Feb 16 12:52:28 crc kubenswrapper[4799]: I0216 12:52:28.729686 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"15c3718e-7e67-4586-8532-6883f43129bd","Type":"ContainerStarted","Data":"fd9cc83da390f65b9ad53fc3b3412b9f7071008b43419afb0677c40f741f68c4"} Feb 16 12:52:29 crc kubenswrapper[4799]: I0216 12:52:29.741702 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"15c3718e-7e67-4586-8532-6883f43129bd","Type":"ContainerStarted","Data":"234b3c9319b6e1bdcbc578127c011a092d90fbdd124de3408d9466bc79f7e8bf"} Feb 16 12:52:29 crc kubenswrapper[4799]: I0216 12:52:29.765863 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerStarted","Data":"b66fa3b4424d9a17c40c30c4ab123a2ba46f8e6a38d0759345198f3f2b92ec18"} Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.489806 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.496114 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.503012 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.509878 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f58d8f5db-4k8dn" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.513229 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.659689 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8cc8d798d-nqmvr"] Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.704463 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 
12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.772780 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74bd488478-wqpd6" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.781383 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.782000 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.801385 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68c799447-vnxkx"] Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.801641 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68c799447-vnxkx" podUID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" containerName="dnsmasq-dns" containerID="cri-o://f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592" gracePeriod=10 Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.839847 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.075582819 podStartE2EDuration="14.839829727s" podCreationTimestamp="2026-02-16 12:52:16 +0000 UTC" firstStartedPulling="2026-02-16 12:52:17.576057497 +0000 UTC m=+1243.169072831" lastFinishedPulling="2026-02-16 12:52:29.340304405 +0000 UTC m=+1254.933319739" observedRunningTime="2026-02-16 12:52:30.836649135 +0000 UTC m=+1256.429664489" watchObservedRunningTime="2026-02-16 12:52:30.839829727 +0000 UTC m=+1256.432845061" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.874999 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.874983157 podStartE2EDuration="4.874983157s" podCreationTimestamp="2026-02-16 12:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:30.867640986 +0000 UTC m=+1256.460656340" watchObservedRunningTime="2026-02-16 12:52:30.874983157 +0000 UTC m=+1256.467998491" Feb 16 12:52:30 crc kubenswrapper[4799]: I0216 12:52:30.911483 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.476454 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.505883 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-swift-storage-0\") pod \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.505961 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk2lr\" (UniqueName: \"kubernetes.io/projected/f2be7512-5841-4f22-bb5a-92c1f2beeceb-kube-api-access-nk2lr\") pod \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.506045 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-sb\") pod \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.506072 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-svc\") pod \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.506162 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-nb\") pod \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.506194 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-config\") pod \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\" (UID: \"f2be7512-5841-4f22-bb5a-92c1f2beeceb\") " Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.565425 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2be7512-5841-4f22-bb5a-92c1f2beeceb-kube-api-access-nk2lr" (OuterVolumeSpecName: "kube-api-access-nk2lr") pod "f2be7512-5841-4f22-bb5a-92c1f2beeceb" (UID: "f2be7512-5841-4f22-bb5a-92c1f2beeceb"). InnerVolumeSpecName "kube-api-access-nk2lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.613740 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk2lr\" (UniqueName: \"kubernetes.io/projected/f2be7512-5841-4f22-bb5a-92c1f2beeceb-kube-api-access-nk2lr\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.623753 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2be7512-5841-4f22-bb5a-92c1f2beeceb" (UID: "f2be7512-5841-4f22-bb5a-92c1f2beeceb"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.660858 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2be7512-5841-4f22-bb5a-92c1f2beeceb" (UID: "f2be7512-5841-4f22-bb5a-92c1f2beeceb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.671818 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2be7512-5841-4f22-bb5a-92c1f2beeceb" (UID: "f2be7512-5841-4f22-bb5a-92c1f2beeceb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.710719 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2be7512-5841-4f22-bb5a-92c1f2beeceb" (UID: "f2be7512-5841-4f22-bb5a-92c1f2beeceb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.712004 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-config" (OuterVolumeSpecName: "config") pod "f2be7512-5841-4f22-bb5a-92c1f2beeceb" (UID: "f2be7512-5841-4f22-bb5a-92c1f2beeceb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.716004 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.716046 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.716062 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.716073 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.716084 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2be7512-5841-4f22-bb5a-92c1f2beeceb-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.795166 4799 generic.go:334] "Generic (PLEG): container finished" podID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" containerID="f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592" exitCode=0 Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.795280 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c799447-vnxkx" event={"ID":"f2be7512-5841-4f22-bb5a-92c1f2beeceb","Type":"ContainerDied","Data":"f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592"} Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 
12:52:31.795350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c799447-vnxkx" event={"ID":"f2be7512-5841-4f22-bb5a-92c1f2beeceb","Type":"ContainerDied","Data":"16ef46d141321b97d7814fee56b63f122372e4065a6a56229fd54339b4d49961"} Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.795370 4799 scope.go:117] "RemoveContainer" containerID="f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.795544 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c799447-vnxkx" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.795710 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8cc8d798d-nqmvr" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerName="placement-log" containerID="cri-o://af6ee9d45905c86611fe341baff2696cb44b3a6d5d3b655f2c263c4abedae3e1" gracePeriod=30 Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.796221 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8cc8d798d-nqmvr" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerName="placement-api" containerID="cri-o://326df233a027b8bc25c00fcf9dc85645b22527b6b8c596439f9b3758f693fca1" gracePeriod=30 Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.844649 4799 scope.go:117] "RemoveContainer" containerID="77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.870409 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68c799447-vnxkx"] Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.878891 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68c799447-vnxkx"] Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.889968 4799 scope.go:117] "RemoveContainer" 
containerID="f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592" Feb 16 12:52:31 crc kubenswrapper[4799]: E0216 12:52:31.890714 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592\": container with ID starting with f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592 not found: ID does not exist" containerID="f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.890760 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592"} err="failed to get container status \"f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592\": rpc error: code = NotFound desc = could not find container \"f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592\": container with ID starting with f92ff1c730508a4c3e37694b8f2202ec800e736f0b16e85df64254cf3fcbb592 not found: ID does not exist" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.890789 4799 scope.go:117] "RemoveContainer" containerID="77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b" Feb 16 12:52:31 crc kubenswrapper[4799]: E0216 12:52:31.891223 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b\": container with ID starting with 77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b not found: ID does not exist" containerID="77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b" Feb 16 12:52:31 crc kubenswrapper[4799]: I0216 12:52:31.891260 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b"} err="failed to get container status \"77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b\": rpc error: code = NotFound desc = could not find container \"77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b\": container with ID starting with 77cd0545d99e3100d2222e2f00c396e962bef034aa0d615582f698e32ba8e39b not found: ID does not exist" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.427036 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 12:52:32 crc kubenswrapper[4799]: E0216 12:52:32.427551 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" containerName="init" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.427569 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" containerName="init" Feb 16 12:52:32 crc kubenswrapper[4799]: E0216 12:52:32.427606 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" containerName="dnsmasq-dns" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.427614 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" containerName="dnsmasq-dns" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.428010 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" containerName="dnsmasq-dns" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.429064 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.433587 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.433626 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.433733 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-57gv9" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.438094 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.535698 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e024c88-16fc-4003-bc76-165ac4445e8f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.535766 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e024c88-16fc-4003-bc76-165ac4445e8f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.535917 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e024c88-16fc-4003-bc76-165ac4445e8f-openstack-config\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.536167 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9prj\" (UniqueName: \"kubernetes.io/projected/8e024c88-16fc-4003-bc76-165ac4445e8f-kube-api-access-x9prj\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.560331 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cdd7b58f8-6bxrn" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.638075 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e024c88-16fc-4003-bc76-165ac4445e8f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.638176 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e024c88-16fc-4003-bc76-165ac4445e8f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.638213 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e024c88-16fc-4003-bc76-165ac4445e8f-openstack-config\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.638304 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9prj\" (UniqueName: \"kubernetes.io/projected/8e024c88-16fc-4003-bc76-165ac4445e8f-kube-api-access-x9prj\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc 
kubenswrapper[4799]: I0216 12:52:32.639099 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e024c88-16fc-4003-bc76-165ac4445e8f-openstack-config\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.645194 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e024c88-16fc-4003-bc76-165ac4445e8f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.648783 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e024c88-16fc-4003-bc76-165ac4445e8f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.657655 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9prj\" (UniqueName: \"kubernetes.io/projected/8e024c88-16fc-4003-bc76-165ac4445e8f-kube-api-access-x9prj\") pod \"openstackclient\" (UID: \"8e024c88-16fc-4003-bc76-165ac4445e8f\") " pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.745973 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.840109 4799 generic.go:334] "Generic (PLEG): container finished" podID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerID="af6ee9d45905c86611fe341baff2696cb44b3a6d5d3b655f2c263c4abedae3e1" exitCode=143 Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.840170 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc8d798d-nqmvr" event={"ID":"75a3aef5-ff55-4650-93f7-93f79bd441ca","Type":"ContainerDied","Data":"af6ee9d45905c86611fe341baff2696cb44b3a6d5d3b655f2c263c4abedae3e1"} Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.868359 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cdd7b58f8-6bxrn" Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.965755 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5dc4944754-qz6dk"] Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.966028 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5dc4944754-qz6dk" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api-log" containerID="cri-o://ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3" gracePeriod=30 Feb 16 12:52:32 crc kubenswrapper[4799]: I0216 12:52:32.966186 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5dc4944754-qz6dk" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api" containerID="cri-o://d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f" gracePeriod=30 Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.166970 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2be7512-5841-4f22-bb5a-92c1f2beeceb" path="/var/lib/kubelet/pods/f2be7512-5841-4f22-bb5a-92c1f2beeceb/volumes" Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 
12:52:33.290183 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.893708 4799 generic.go:334] "Generic (PLEG): container finished" podID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerID="326df233a027b8bc25c00fcf9dc85645b22527b6b8c596439f9b3758f693fca1" exitCode=0 Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.893761 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc8d798d-nqmvr" event={"ID":"75a3aef5-ff55-4650-93f7-93f79bd441ca","Type":"ContainerDied","Data":"326df233a027b8bc25c00fcf9dc85645b22527b6b8c596439f9b3758f693fca1"} Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.930211 4799 generic.go:334] "Generic (PLEG): container finished" podID="5357e09b-7a51-4687-be1c-99a473120c90" containerID="6a7d9541f9ee6c4936a4ca92c8e7cbe7f3befe853e369e78b7a6a37ba1b1f36a" exitCode=137 Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.930279 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6746fc7768-pc68r" event={"ID":"5357e09b-7a51-4687-be1c-99a473120c90","Type":"ContainerDied","Data":"6a7d9541f9ee6c4936a4ca92c8e7cbe7f3befe853e369e78b7a6a37ba1b1f36a"} Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.955566 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.965930 4799 generic.go:334] "Generic (PLEG): container finished" podID="91e5425c-df09-441e-99d7-43af068fc7b0" containerID="ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3" exitCode=143 Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.966022 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dc4944754-qz6dk" event={"ID":"91e5425c-df09-441e-99d7-43af068fc7b0","Type":"ContainerDied","Data":"ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3"} Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.981258 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8e024c88-16fc-4003-bc76-165ac4445e8f","Type":"ContainerStarted","Data":"b88ecf4c67c4a3fac9446e6ceaa8f7bf5020491db340e8e6b1f4bde341a59871"} Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.992571 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-secret-key\") pod \"5357e09b-7a51-4687-be1c-99a473120c90\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.992612 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-combined-ca-bundle\") pod \"5357e09b-7a51-4687-be1c-99a473120c90\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.992666 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmv4f\" (UniqueName: \"kubernetes.io/projected/5357e09b-7a51-4687-be1c-99a473120c90-kube-api-access-tmv4f\") pod \"5357e09b-7a51-4687-be1c-99a473120c90\" (UID: 
\"5357e09b-7a51-4687-be1c-99a473120c90\") " Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.992731 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-config-data\") pod \"5357e09b-7a51-4687-be1c-99a473120c90\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.992854 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-scripts\") pod \"5357e09b-7a51-4687-be1c-99a473120c90\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.992910 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5357e09b-7a51-4687-be1c-99a473120c90-logs\") pod \"5357e09b-7a51-4687-be1c-99a473120c90\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " Feb 16 12:52:33 crc kubenswrapper[4799]: I0216 12:52:33.992998 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-tls-certs\") pod \"5357e09b-7a51-4687-be1c-99a473120c90\" (UID: \"5357e09b-7a51-4687-be1c-99a473120c90\") " Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.000421 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5357e09b-7a51-4687-be1c-99a473120c90-logs" (OuterVolumeSpecName: "logs") pod "5357e09b-7a51-4687-be1c-99a473120c90" (UID: "5357e09b-7a51-4687-be1c-99a473120c90"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.017368 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5357e09b-7a51-4687-be1c-99a473120c90" (UID: "5357e09b-7a51-4687-be1c-99a473120c90"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.029328 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5357e09b-7a51-4687-be1c-99a473120c90-kube-api-access-tmv4f" (OuterVolumeSpecName: "kube-api-access-tmv4f") pod "5357e09b-7a51-4687-be1c-99a473120c90" (UID: "5357e09b-7a51-4687-be1c-99a473120c90"). InnerVolumeSpecName "kube-api-access-tmv4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.029908 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.089499 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-scripts" (OuterVolumeSpecName: "scripts") pod "5357e09b-7a51-4687-be1c-99a473120c90" (UID: "5357e09b-7a51-4687-be1c-99a473120c90"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.094860 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-scripts\") pod \"75a3aef5-ff55-4650-93f7-93f79bd441ca\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.095312 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-public-tls-certs\") pod \"75a3aef5-ff55-4650-93f7-93f79bd441ca\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.095440 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-config-data\") pod \"75a3aef5-ff55-4650-93f7-93f79bd441ca\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.095525 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-combined-ca-bundle\") pod \"75a3aef5-ff55-4650-93f7-93f79bd441ca\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.095603 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a3aef5-ff55-4650-93f7-93f79bd441ca-logs\") pod \"75a3aef5-ff55-4650-93f7-93f79bd441ca\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.095704 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45g8w\" (UniqueName: 
\"kubernetes.io/projected/75a3aef5-ff55-4650-93f7-93f79bd441ca-kube-api-access-45g8w\") pod \"75a3aef5-ff55-4650-93f7-93f79bd441ca\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.095811 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-internal-tls-certs\") pod \"75a3aef5-ff55-4650-93f7-93f79bd441ca\" (UID: \"75a3aef5-ff55-4650-93f7-93f79bd441ca\") " Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.098139 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.098192 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5357e09b-7a51-4687-be1c-99a473120c90-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.098209 4799 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.098223 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmv4f\" (UniqueName: \"kubernetes.io/projected/5357e09b-7a51-4687-be1c-99a473120c90-kube-api-access-tmv4f\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.099169 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a3aef5-ff55-4650-93f7-93f79bd441ca-logs" (OuterVolumeSpecName: "logs") pod "75a3aef5-ff55-4650-93f7-93f79bd441ca" (UID: "75a3aef5-ff55-4650-93f7-93f79bd441ca"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.119957 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-config-data" (OuterVolumeSpecName: "config-data") pod "5357e09b-7a51-4687-be1c-99a473120c90" (UID: "5357e09b-7a51-4687-be1c-99a473120c90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.123007 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5357e09b-7a51-4687-be1c-99a473120c90" (UID: "5357e09b-7a51-4687-be1c-99a473120c90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.124917 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-scripts" (OuterVolumeSpecName: "scripts") pod "75a3aef5-ff55-4650-93f7-93f79bd441ca" (UID: "75a3aef5-ff55-4650-93f7-93f79bd441ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.126107 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a3aef5-ff55-4650-93f7-93f79bd441ca-kube-api-access-45g8w" (OuterVolumeSpecName: "kube-api-access-45g8w") pod "75a3aef5-ff55-4650-93f7-93f79bd441ca" (UID: "75a3aef5-ff55-4650-93f7-93f79bd441ca"). InnerVolumeSpecName "kube-api-access-45g8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.183310 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "5357e09b-7a51-4687-be1c-99a473120c90" (UID: "5357e09b-7a51-4687-be1c-99a473120c90"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.200608 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.200644 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.200654 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5357e09b-7a51-4687-be1c-99a473120c90-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.200664 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a3aef5-ff55-4650-93f7-93f79bd441ca-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.200673 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45g8w\" (UniqueName: \"kubernetes.io/projected/75a3aef5-ff55-4650-93f7-93f79bd441ca-kube-api-access-45g8w\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.200684 4799 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5357e09b-7a51-4687-be1c-99a473120c90-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.233401 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-config-data" (OuterVolumeSpecName: "config-data") pod "75a3aef5-ff55-4650-93f7-93f79bd441ca" (UID: "75a3aef5-ff55-4650-93f7-93f79bd441ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.233778 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75a3aef5-ff55-4650-93f7-93f79bd441ca" (UID: "75a3aef5-ff55-4650-93f7-93f79bd441ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.303728 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.303786 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.399329 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "75a3aef5-ff55-4650-93f7-93f79bd441ca" (UID: "75a3aef5-ff55-4650-93f7-93f79bd441ca"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.405729 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.432314 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "75a3aef5-ff55-4650-93f7-93f79bd441ca" (UID: "75a3aef5-ff55-4650-93f7-93f79bd441ca"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.510428 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a3aef5-ff55-4650-93f7-93f79bd441ca-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.997434 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8cc8d798d-nqmvr" Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.998250 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc8d798d-nqmvr" event={"ID":"75a3aef5-ff55-4650-93f7-93f79bd441ca","Type":"ContainerDied","Data":"7f15e76afa9adbc362d3402cc00eb78077786b8a34e6c349593ec8ccddbacfe0"} Feb 16 12:52:34 crc kubenswrapper[4799]: I0216 12:52:34.998305 4799 scope.go:117] "RemoveContainer" containerID="326df233a027b8bc25c00fcf9dc85645b22527b6b8c596439f9b3758f693fca1" Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.004184 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6746fc7768-pc68r" event={"ID":"5357e09b-7a51-4687-be1c-99a473120c90","Type":"ContainerDied","Data":"0c48f7120a195adfe4901873eed7a5533e82b99dbd648265975772d29a8c49a9"} Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.004261 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6746fc7768-pc68r" Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.077841 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6746fc7768-pc68r"] Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.098152 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6746fc7768-pc68r"] Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.119496 4799 scope.go:117] "RemoveContainer" containerID="af6ee9d45905c86611fe341baff2696cb44b3a6d5d3b655f2c263c4abedae3e1" Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.186855 4799 scope.go:117] "RemoveContainer" containerID="9e98b1e0776c21e798c8ec0399674b680e3006908cb7040c91104594062fa43f" Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.301185 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5357e09b-7a51-4687-be1c-99a473120c90" path="/var/lib/kubelet/pods/5357e09b-7a51-4687-be1c-99a473120c90/volumes" Feb 16 12:52:35 crc 
kubenswrapper[4799]: I0216 12:52:35.301881 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8cc8d798d-nqmvr"] Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.301911 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8cc8d798d-nqmvr"] Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.410373 4799 scope.go:117] "RemoveContainer" containerID="6a7d9541f9ee6c4936a4ca92c8e7cbe7f3befe853e369e78b7a6a37ba1b1f36a" Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.520454 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 12:52:35 crc kubenswrapper[4799]: I0216 12:52:35.566243 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:36 crc kubenswrapper[4799]: I0216 12:52:36.020475 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="cinder-scheduler" containerID="cri-o://5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc" gracePeriod=30 Feb 16 12:52:36 crc kubenswrapper[4799]: I0216 12:52:36.020745 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="probe" containerID="cri-o://c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8" gracePeriod=30 Feb 16 12:52:36 crc kubenswrapper[4799]: I0216 12:52:36.380665 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5dc4944754-qz6dk" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.189:9311/healthcheck\": read tcp 10.217.0.2:53968->10.217.0.189:9311: read: connection reset by peer" Feb 16 12:52:36 crc kubenswrapper[4799]: I0216 12:52:36.380696 4799 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/barbican-api-5dc4944754-qz6dk" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.189:9311/healthcheck\": read tcp 10.217.0.2:53974->10.217.0.189:9311: read: connection reset by peer" Feb 16 12:52:36 crc kubenswrapper[4799]: I0216 12:52:36.925786 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.054838 4799 generic.go:334] "Generic (PLEG): container finished" podID="91e5425c-df09-441e-99d7-43af068fc7b0" containerID="d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f" exitCode=0 Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.054885 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dc4944754-qz6dk" event={"ID":"91e5425c-df09-441e-99d7-43af068fc7b0","Type":"ContainerDied","Data":"d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f"} Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.054913 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dc4944754-qz6dk" event={"ID":"91e5425c-df09-441e-99d7-43af068fc7b0","Type":"ContainerDied","Data":"dfeeac113d9d7621f53249bc4a9f0a1ee4bb8a4cfee47775f601fb6a6ede23e4"} Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.054929 4799 scope.go:117] "RemoveContainer" containerID="d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.055057 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5dc4944754-qz6dk" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.106681 4799 scope.go:117] "RemoveContainer" containerID="ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.119280 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data-custom\") pod \"91e5425c-df09-441e-99d7-43af068fc7b0\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.119345 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-combined-ca-bundle\") pod \"91e5425c-df09-441e-99d7-43af068fc7b0\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.119424 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data\") pod \"91e5425c-df09-441e-99d7-43af068fc7b0\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.119471 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zksmb\" (UniqueName: \"kubernetes.io/projected/91e5425c-df09-441e-99d7-43af068fc7b0-kube-api-access-zksmb\") pod \"91e5425c-df09-441e-99d7-43af068fc7b0\" (UID: \"91e5425c-df09-441e-99d7-43af068fc7b0\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.119518 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e5425c-df09-441e-99d7-43af068fc7b0-logs\") pod \"91e5425c-df09-441e-99d7-43af068fc7b0\" (UID: 
\"91e5425c-df09-441e-99d7-43af068fc7b0\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.120671 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91e5425c-df09-441e-99d7-43af068fc7b0-logs" (OuterVolumeSpecName: "logs") pod "91e5425c-df09-441e-99d7-43af068fc7b0" (UID: "91e5425c-df09-441e-99d7-43af068fc7b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.126927 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "91e5425c-df09-441e-99d7-43af068fc7b0" (UID: "91e5425c-df09-441e-99d7-43af068fc7b0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.140311 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e5425c-df09-441e-99d7-43af068fc7b0-kube-api-access-zksmb" (OuterVolumeSpecName: "kube-api-access-zksmb") pod "91e5425c-df09-441e-99d7-43af068fc7b0" (UID: "91e5425c-df09-441e-99d7-43af068fc7b0"). InnerVolumeSpecName "kube-api-access-zksmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.167262 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" path="/var/lib/kubelet/pods/75a3aef5-ff55-4650-93f7-93f79bd441ca/volumes" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.170176 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91e5425c-df09-441e-99d7-43af068fc7b0" (UID: "91e5425c-df09-441e-99d7-43af068fc7b0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.172486 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data" (OuterVolumeSpecName: "config-data") pod "91e5425c-df09-441e-99d7-43af068fc7b0" (UID: "91e5425c-df09-441e-99d7-43af068fc7b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.222550 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.222661 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.222700 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e5425c-df09-441e-99d7-43af068fc7b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.222716 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zksmb\" (UniqueName: \"kubernetes.io/projected/91e5425c-df09-441e-99d7-43af068fc7b0-kube-api-access-zksmb\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.222734 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e5425c-df09-441e-99d7-43af068fc7b0-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.288595 4799 scope.go:117] "RemoveContainer" 
containerID="d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f" Feb 16 12:52:37 crc kubenswrapper[4799]: E0216 12:52:37.289353 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f\": container with ID starting with d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f not found: ID does not exist" containerID="d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.289395 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f"} err="failed to get container status \"d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f\": rpc error: code = NotFound desc = could not find container \"d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f\": container with ID starting with d012068a3b07afeefb4cf93fd328e4fbc27b240fd45ee5bea0f867c461e5175f not found: ID does not exist" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.289424 4799 scope.go:117] "RemoveContainer" containerID="ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3" Feb 16 12:52:37 crc kubenswrapper[4799]: E0216 12:52:37.289838 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3\": container with ID starting with ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3 not found: ID does not exist" containerID="ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.289861 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3"} err="failed to get container status \"ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3\": rpc error: code = NotFound desc = could not find container \"ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3\": container with ID starting with ea9a91c32461dab61185d9479b91349dcfae15015d1d308f6fa2fe0f302eaab3 not found: ID does not exist" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.416222 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5dc4944754-qz6dk"] Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.458557 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5dc4944754-qz6dk"] Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.649524 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.852015 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-scripts\") pod \"4254e62b-6303-4b05-8d67-9b9090d9d757\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.852092 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data-custom\") pod \"4254e62b-6303-4b05-8d67-9b9090d9d757\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.852134 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4254e62b-6303-4b05-8d67-9b9090d9d757-etc-machine-id\") pod \"4254e62b-6303-4b05-8d67-9b9090d9d757\" (UID: 
\"4254e62b-6303-4b05-8d67-9b9090d9d757\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.852252 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data\") pod \"4254e62b-6303-4b05-8d67-9b9090d9d757\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.852268 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp8rh\" (UniqueName: \"kubernetes.io/projected/4254e62b-6303-4b05-8d67-9b9090d9d757-kube-api-access-hp8rh\") pod \"4254e62b-6303-4b05-8d67-9b9090d9d757\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.852300 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-combined-ca-bundle\") pod \"4254e62b-6303-4b05-8d67-9b9090d9d757\" (UID: \"4254e62b-6303-4b05-8d67-9b9090d9d757\") " Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.854092 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4254e62b-6303-4b05-8d67-9b9090d9d757-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4254e62b-6303-4b05-8d67-9b9090d9d757" (UID: "4254e62b-6303-4b05-8d67-9b9090d9d757"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.857565 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-scripts" (OuterVolumeSpecName: "scripts") pod "4254e62b-6303-4b05-8d67-9b9090d9d757" (UID: "4254e62b-6303-4b05-8d67-9b9090d9d757"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.858000 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4254e62b-6303-4b05-8d67-9b9090d9d757" (UID: "4254e62b-6303-4b05-8d67-9b9090d9d757"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.862630 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4254e62b-6303-4b05-8d67-9b9090d9d757-kube-api-access-hp8rh" (OuterVolumeSpecName: "kube-api-access-hp8rh") pod "4254e62b-6303-4b05-8d67-9b9090d9d757" (UID: "4254e62b-6303-4b05-8d67-9b9090d9d757"). InnerVolumeSpecName "kube-api-access-hp8rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.902706 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4254e62b-6303-4b05-8d67-9b9090d9d757" (UID: "4254e62b-6303-4b05-8d67-9b9090d9d757"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.955012 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp8rh\" (UniqueName: \"kubernetes.io/projected/4254e62b-6303-4b05-8d67-9b9090d9d757-kube-api-access-hp8rh\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.955275 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.955358 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.955413 4799 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.955464 4799 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4254e62b-6303-4b05-8d67-9b9090d9d757-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:37 crc kubenswrapper[4799]: I0216 12:52:37.965767 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data" (OuterVolumeSpecName: "config-data") pod "4254e62b-6303-4b05-8d67-9b9090d9d757" (UID: "4254e62b-6303-4b05-8d67-9b9090d9d757"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.056972 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4254e62b-6303-4b05-8d67-9b9090d9d757-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.068274 4799 generic.go:334] "Generic (PLEG): container finished" podID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerID="c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8" exitCode=0 Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.068317 4799 generic.go:334] "Generic (PLEG): container finished" podID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerID="5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc" exitCode=0 Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.068366 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.068706 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4254e62b-6303-4b05-8d67-9b9090d9d757","Type":"ContainerDied","Data":"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8"} Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.069380 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4254e62b-6303-4b05-8d67-9b9090d9d757","Type":"ContainerDied","Data":"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc"} Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.069530 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4254e62b-6303-4b05-8d67-9b9090d9d757","Type":"ContainerDied","Data":"700c3bd6941a5140d377cf0bb0f0c73182c3e7003ed0967ccbb96aedb7e28024"} Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.069493 4799 scope.go:117] 
"RemoveContainer" containerID="c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.105162 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.108065 4799 scope.go:117] "RemoveContainer" containerID="5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.114556 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.127530 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.128269 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.128336 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.128391 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon-log" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.128438 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon-log" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.128507 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="probe" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.128557 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="probe" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.128614 4799 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api-log" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.128663 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api-log" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.128717 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerName="placement-api" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.128764 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerName="placement-api" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.128817 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.139114 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.139460 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerName="placement-log" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.139526 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerName="placement-log" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.139607 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="cinder-scheduler" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.139664 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="cinder-scheduler" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.140013 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.140090 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="cinder-scheduler" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.140178 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5357e09b-7a51-4687-be1c-99a473120c90" containerName="horizon-log" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.140249 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerName="placement-api" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.140310 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api-log" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.140373 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a3aef5-ff55-4650-93f7-93f79bd441ca" containerName="placement-log" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.140428 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" containerName="barbican-api" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.140481 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" containerName="probe" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.141475 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.142531 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.147371 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.150493 4799 scope.go:117] "RemoveContainer" containerID="2f75000c46b704e065d5568d32cd3a97a46f55a9beb052030d6213d1e3601bd1" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.153359 4799 scope.go:117] "RemoveContainer" containerID="c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.157968 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8\": container with ID starting with c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8 not found: ID does not exist" containerID="c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.158413 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8"} err="failed to get container status \"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8\": rpc error: code = NotFound desc = could not find container \"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8\": container with ID starting with c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8 not found: ID does not exist" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.158529 4799 scope.go:117] "RemoveContainer" containerID="5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc" Feb 16 12:52:38 crc kubenswrapper[4799]: E0216 12:52:38.160758 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc\": container with ID starting with 5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc not found: ID does not exist" containerID="5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.163337 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc"} err="failed to get container status \"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc\": rpc error: code = NotFound desc = could not find container \"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc\": container with ID starting with 5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc not found: ID does not exist" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.163633 4799 scope.go:117] "RemoveContainer" containerID="c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.165725 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8"} err="failed to get container status \"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8\": rpc error: code = NotFound desc = could not find container \"c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8\": container with ID starting with c644798a451e82a430ec08c1cc2c8ccd50a305f439b5761df92ef80a4a6592d8 not found: ID does not exist" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.165886 4799 scope.go:117] "RemoveContainer" containerID="5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.166713 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc"} err="failed to get container status \"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc\": rpc error: code = NotFound desc = could not find container \"5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc\": container with ID starting with 5a6d51b48cacd021e7b89d2e4bb80cd64ef8d9137b4db1074e91114a63e1edbc not found: ID does not exist" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.269444 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.269543 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.269576 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-config-data\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.269599 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58gqr\" (UniqueName: \"kubernetes.io/projected/0404faed-9e4d-4374-83ef-13dc13839e7b-kube-api-access-58gqr\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " 
pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.269688 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.269706 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0404faed-9e4d-4374-83ef-13dc13839e7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.372077 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.372487 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-config-data\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.372525 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58gqr\" (UniqueName: \"kubernetes.io/projected/0404faed-9e4d-4374-83ef-13dc13839e7b-kube-api-access-58gqr\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.372622 
4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.372649 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0404faed-9e4d-4374-83ef-13dc13839e7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.372751 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.375356 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0404faed-9e4d-4374-83ef-13dc13839e7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.376808 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.381261 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-config-data\") 
pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.383580 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.389966 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0404faed-9e4d-4374-83ef-13dc13839e7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.390936 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58gqr\" (UniqueName: \"kubernetes.io/projected/0404faed-9e4d-4374-83ef-13dc13839e7b-kube-api-access-58gqr\") pod \"cinder-scheduler-0\" (UID: \"0404faed-9e4d-4374-83ef-13dc13839e7b\") " pod="openstack/cinder-scheduler-0" Feb 16 12:52:38 crc kubenswrapper[4799]: I0216 12:52:38.468910 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 12:52:39 crc kubenswrapper[4799]: I0216 12:52:39.087694 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerStarted","Data":"7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02"} Feb 16 12:52:39 crc kubenswrapper[4799]: I0216 12:52:39.159984 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4254e62b-6303-4b05-8d67-9b9090d9d757" path="/var/lib/kubelet/pods/4254e62b-6303-4b05-8d67-9b9090d9d757/volumes" Feb 16 12:52:39 crc kubenswrapper[4799]: I0216 12:52:39.160885 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e5425c-df09-441e-99d7-43af068fc7b0" path="/var/lib/kubelet/pods/91e5425c-df09-441e-99d7-43af068fc7b0/volumes" Feb 16 12:52:39 crc kubenswrapper[4799]: I0216 12:52:39.175942 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 12:52:39 crc kubenswrapper[4799]: I0216 12:52:39.690815 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bd85f5c47-gbtmk" Feb 16 12:52:39 crc kubenswrapper[4799]: I0216 12:52:39.758734 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c88d8b85b-zrggw"] Feb 16 12:52:39 crc kubenswrapper[4799]: I0216 12:52:39.759310 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c88d8b85b-zrggw" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" containerName="neutron-api" containerID="cri-o://ab3d851bd648412916a9a4a939adaeb99fa7d4f4478a2a335f301c014dacb378" gracePeriod=30 Feb 16 12:52:39 crc kubenswrapper[4799]: I0216 12:52:39.761967 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c88d8b85b-zrggw" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" containerName="neutron-httpd" 
containerID="cri-o://25aa58840310b30ac173168ef70859089686d547aa93dddc55acca132b627fdd" gracePeriod=30 Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.114173 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0404faed-9e4d-4374-83ef-13dc13839e7b","Type":"ContainerStarted","Data":"a45be3479c249e03c0f35c21766f931b4cd2566169d690eb8019670f3077de80"} Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.114481 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0404faed-9e4d-4374-83ef-13dc13839e7b","Type":"ContainerStarted","Data":"260f06b73a2ecb7af5ae7fd987b7901c896513baab6c8ab16830fea62d8c150d"} Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.528075 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7f54946f5f-2jrb5"] Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.560088 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.572855 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.573357 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.573586 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.587232 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7f54946f5f-2jrb5"] Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.651268 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-internal-tls-certs\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.651343 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/441c04e7-2794-48cf-bc03-4c13536d22c4-etc-swift\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.651388 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-config-data\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.652090 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xmpp\" (UniqueName: \"kubernetes.io/projected/441c04e7-2794-48cf-bc03-4c13536d22c4-kube-api-access-5xmpp\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.652228 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-public-tls-certs\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.652265 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-combined-ca-bundle\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.652589 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441c04e7-2794-48cf-bc03-4c13536d22c4-run-httpd\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.652709 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441c04e7-2794-48cf-bc03-4c13536d22c4-log-httpd\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.754639 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-internal-tls-certs\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.754692 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/441c04e7-2794-48cf-bc03-4c13536d22c4-etc-swift\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.754760 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-config-data\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.754834 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xmpp\" (UniqueName: \"kubernetes.io/projected/441c04e7-2794-48cf-bc03-4c13536d22c4-kube-api-access-5xmpp\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.754874 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-public-tls-certs\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.754895 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-combined-ca-bundle\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.754936 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441c04e7-2794-48cf-bc03-4c13536d22c4-run-httpd\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.754980 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/441c04e7-2794-48cf-bc03-4c13536d22c4-log-httpd\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.755689 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441c04e7-2794-48cf-bc03-4c13536d22c4-log-httpd\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.771401 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-internal-tls-certs\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.776097 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-combined-ca-bundle\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.776940 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441c04e7-2794-48cf-bc03-4c13536d22c4-run-httpd\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.785519 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/441c04e7-2794-48cf-bc03-4c13536d22c4-etc-swift\") pod \"swift-proxy-7f54946f5f-2jrb5\" 
(UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.786043 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-config-data\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.786493 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441c04e7-2794-48cf-bc03-4c13536d22c4-public-tls-certs\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.792611 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xmpp\" (UniqueName: \"kubernetes.io/projected/441c04e7-2794-48cf-bc03-4c13536d22c4-kube-api-access-5xmpp\") pod \"swift-proxy-7f54946f5f-2jrb5\" (UID: \"441c04e7-2794-48cf-bc03-4c13536d22c4\") " pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.879732 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.879966 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-log" containerID="cri-o://25ddfa785289c44caca61f786064db6c542d2dd00358bf5705b0c8b0c69f1f32" gracePeriod=30 Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.880401 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" 
containerName="glance-httpd" containerID="cri-o://cfa9d73604d9b0f5a0fd775c76ce5af06e86b6ad4bf12c502df9baf9e3f07850" gracePeriod=30 Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.889298 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.175:9292/healthcheck\": EOF" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.889405 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.175:9292/healthcheck\": EOF" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.889871 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.175:9292/healthcheck\": EOF" Feb 16 12:52:40 crc kubenswrapper[4799]: I0216 12:52:40.927877 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:41 crc kubenswrapper[4799]: I0216 12:52:41.191781 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 12:52:41 crc kubenswrapper[4799]: I0216 12:52:41.207309 4799 generic.go:334] "Generic (PLEG): container finished" podID="02e06f59-2164-4486-9138-2819bf6dcf26" containerID="25aa58840310b30ac173168ef70859089686d547aa93dddc55acca132b627fdd" exitCode=0 Feb 16 12:52:41 crc kubenswrapper[4799]: I0216 12:52:41.207682 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c88d8b85b-zrggw" event={"ID":"02e06f59-2164-4486-9138-2819bf6dcf26","Type":"ContainerDied","Data":"25aa58840310b30ac173168ef70859089686d547aa93dddc55acca132b627fdd"} Feb 16 12:52:41 crc kubenswrapper[4799]: I0216 12:52:41.239647 4799 generic.go:334] "Generic (PLEG): container finished" podID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerID="25ddfa785289c44caca61f786064db6c542d2dd00358bf5705b0c8b0c69f1f32" exitCode=143 Feb 16 12:52:41 crc kubenswrapper[4799]: I0216 12:52:41.239706 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e8b5246-e2d5-4349-aa8c-d58091276c4b","Type":"ContainerDied","Data":"25ddfa785289c44caca61f786064db6c542d2dd00358bf5705b0c8b0c69f1f32"} Feb 16 12:52:41 crc kubenswrapper[4799]: I0216 12:52:41.791918 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7f54946f5f-2jrb5"] Feb 16 12:52:42 crc kubenswrapper[4799]: I0216 12:52:42.259940 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-94c85d75f-kbj7j" podUID="a74ff520-0a2d-4853-9070-fdf3f2aa7a47" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.179:9696/\": dial tcp 10.217.0.179:9696: i/o timeout" Feb 16 12:52:42 crc kubenswrapper[4799]: I0216 12:52:42.303808 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-7f54946f5f-2jrb5" event={"ID":"441c04e7-2794-48cf-bc03-4c13536d22c4","Type":"ContainerStarted","Data":"f76e061deabec1d994f693bba901694b84454246599a6fabef618f446d9ba5d2"} Feb 16 12:52:42 crc kubenswrapper[4799]: I0216 12:52:42.303861 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f54946f5f-2jrb5" event={"ID":"441c04e7-2794-48cf-bc03-4c13536d22c4","Type":"ContainerStarted","Data":"ab0e4eacd1f103c8902308773351aacf0f67672c37ea811e5d7ee26d1bab7fb2"} Feb 16 12:52:42 crc kubenswrapper[4799]: I0216 12:52:42.306660 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0404faed-9e4d-4374-83ef-13dc13839e7b","Type":"ContainerStarted","Data":"826ca6e8eedecf3099154df5ee052b8bed13fa1a6041c61dd353d8aca67c3b1b"} Feb 16 12:52:42 crc kubenswrapper[4799]: I0216 12:52:42.343099 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.343074193 podStartE2EDuration="4.343074193s" podCreationTimestamp="2026-02-16 12:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:42.335364551 +0000 UTC m=+1267.928379885" watchObservedRunningTime="2026-02-16 12:52:42.343074193 +0000 UTC m=+1267.936089527" Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.023986 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.025347 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="ceilometer-central-agent" containerID="cri-o://93c4f9bcdd3b8aa3a53b4baddec229ace78858c76cbc43de1ca4ae9ba436fb08" gracePeriod=30 Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.026110 4799 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="proxy-httpd" containerID="cri-o://b66fa3b4424d9a17c40c30c4ab123a2ba46f8e6a38d0759345198f3f2b92ec18" gracePeriod=30 Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.026186 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="sg-core" containerID="cri-o://0b9cd67e2a46975e53cd8d565a95ef99a5d4bd17215447b5c416fcf88beb621d" gracePeriod=30 Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.026220 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="ceilometer-notification-agent" containerID="cri-o://adacb7f15ca185ed5bdf64f13c3526e805b3708d8ffeefb41148fef96f86bd18" gracePeriod=30 Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.041904 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.340184 4799 generic.go:334] "Generic (PLEG): container finished" podID="53f42733-a32b-4b85-b53d-842ffb840563" containerID="0b9cd67e2a46975e53cd8d565a95ef99a5d4bd17215447b5c416fcf88beb621d" exitCode=2 Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.340247 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerDied","Data":"0b9cd67e2a46975e53cd8d565a95ef99a5d4bd17215447b5c416fcf88beb621d"} Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.342692 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f54946f5f-2jrb5" 
event={"ID":"441c04e7-2794-48cf-bc03-4c13536d22c4","Type":"ContainerStarted","Data":"7968600fedc8f51f626fa6b6412bbcf53c66e0ed4a6960c1336dc351153b11a7"} Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.343099 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.369743 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7f54946f5f-2jrb5" podStartSLOduration=3.369723403 podStartE2EDuration="3.369723403s" podCreationTimestamp="2026-02-16 12:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:43.367110748 +0000 UTC m=+1268.960126082" watchObservedRunningTime="2026-02-16 12:52:43.369723403 +0000 UTC m=+1268.962738737" Feb 16 12:52:43 crc kubenswrapper[4799]: I0216 12:52:43.470109 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.371285 4799 generic.go:334] "Generic (PLEG): container finished" podID="53f42733-a32b-4b85-b53d-842ffb840563" containerID="b66fa3b4424d9a17c40c30c4ab123a2ba46f8e6a38d0759345198f3f2b92ec18" exitCode=0 Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.372268 4799 generic.go:334] "Generic (PLEG): container finished" podID="53f42733-a32b-4b85-b53d-842ffb840563" containerID="93c4f9bcdd3b8aa3a53b4baddec229ace78858c76cbc43de1ca4ae9ba436fb08" exitCode=0 Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.371331 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerDied","Data":"b66fa3b4424d9a17c40c30c4ab123a2ba46f8e6a38d0759345198f3f2b92ec18"} Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.372466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerDied","Data":"93c4f9bcdd3b8aa3a53b4baddec229ace78858c76cbc43de1ca4ae9ba436fb08"} Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.377420 4799 generic.go:334] "Generic (PLEG): container finished" podID="02e06f59-2164-4486-9138-2819bf6dcf26" containerID="ab3d851bd648412916a9a4a939adaeb99fa7d4f4478a2a335f301c014dacb378" exitCode=0 Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.377521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c88d8b85b-zrggw" event={"ID":"02e06f59-2164-4486-9138-2819bf6dcf26","Type":"ContainerDied","Data":"ab3d851bd648412916a9a4a939adaeb99fa7d4f4478a2a335f301c014dacb378"} Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.380326 4799 generic.go:334] "Generic (PLEG): container finished" podID="89824920-bcd3-4640-b27b-68554fad00bb" containerID="7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02" exitCode=1 Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.380354 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerDied","Data":"7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02"} Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.380406 4799 scope.go:117] "RemoveContainer" containerID="2f75000c46b704e065d5568d32cd3a97a46f55a9beb052030d6213d1e3601bd1" Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.380554 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:44 crc kubenswrapper[4799]: I0216 12:52:44.383983 4799 scope.go:117] "RemoveContainer" containerID="7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02" Feb 16 12:52:44 crc kubenswrapper[4799]: E0216 12:52:44.384710 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(89824920-bcd3-4640-b27b-68554fad00bb)\"" pod="openstack/watcher-decision-engine-0" podUID="89824920-bcd3-4640-b27b-68554fad00bb" Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.140444 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.141164 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerName="glance-log" containerID="cri-o://f4d2c422762feab79fce7666498ee40316aba4ac7c11a6a06d7feeb182c32f42" gracePeriod=30 Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.141303 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerName="glance-httpd" containerID="cri-o://bb7dd9ef23c9a03af1905e08d3a8471c8d1354dbdf48d6ce0fd51be004dc695e" gracePeriod=30 Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.413865 4799 generic.go:334] "Generic (PLEG): container finished" podID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerID="cfa9d73604d9b0f5a0fd775c76ce5af06e86b6ad4bf12c502df9baf9e3f07850" exitCode=0 Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.413992 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e8b5246-e2d5-4349-aa8c-d58091276c4b","Type":"ContainerDied","Data":"cfa9d73604d9b0f5a0fd775c76ce5af06e86b6ad4bf12c502df9baf9e3f07850"} Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.419607 4799 generic.go:334] "Generic (PLEG): container finished" podID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerID="f4d2c422762feab79fce7666498ee40316aba4ac7c11a6a06d7feeb182c32f42" exitCode=143 
Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.420192 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94da3e05-8956-4ab9-b272-46b6afcf14d3","Type":"ContainerDied","Data":"f4d2c422762feab79fce7666498ee40316aba4ac7c11a6a06d7feeb182c32f42"} Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.872426 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.872556 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 12:52:45 crc kubenswrapper[4799]: I0216 12:52:45.873762 4799 scope.go:117] "RemoveContainer" containerID="7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02" Feb 16 12:52:45 crc kubenswrapper[4799]: E0216 12:52:45.874157 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(89824920-bcd3-4640-b27b-68554fad00bb)\"" pod="openstack/watcher-decision-engine-0" podUID="89824920-bcd3-4640-b27b-68554fad00bb" Feb 16 12:52:46 crc kubenswrapper[4799]: I0216 12:52:46.731112 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.194:3000/\": dial tcp 10.217.0.194:3000: connect: connection refused" Feb 16 12:52:47 crc kubenswrapper[4799]: I0216 12:52:47.459255 4799 generic.go:334] "Generic (PLEG): container finished" podID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerID="bb7dd9ef23c9a03af1905e08d3a8471c8d1354dbdf48d6ce0fd51be004dc695e" exitCode=0 Feb 16 12:52:47 crc kubenswrapper[4799]: I0216 12:52:47.459340 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94da3e05-8956-4ab9-b272-46b6afcf14d3","Type":"ContainerDied","Data":"bb7dd9ef23c9a03af1905e08d3a8471c8d1354dbdf48d6ce0fd51be004dc695e"} Feb 16 12:52:47 crc kubenswrapper[4799]: I0216 12:52:47.463973 4799 generic.go:334] "Generic (PLEG): container finished" podID="53f42733-a32b-4b85-b53d-842ffb840563" containerID="adacb7f15ca185ed5bdf64f13c3526e805b3708d8ffeefb41148fef96f86bd18" exitCode=0 Feb 16 12:52:47 crc kubenswrapper[4799]: I0216 12:52:47.464025 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerDied","Data":"adacb7f15ca185ed5bdf64f13c3526e805b3708d8ffeefb41148fef96f86bd18"} Feb 16 12:52:48 crc kubenswrapper[4799]: I0216 12:52:48.668813 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 12:52:50 crc kubenswrapper[4799]: I0216 12:52:50.824599 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:52:50 crc kubenswrapper[4799]: I0216 12:52:50.939027 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:50 crc kubenswrapper[4799]: I0216 12:52:50.945284 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7f54946f5f-2jrb5" Feb 16 12:52:50 crc kubenswrapper[4799]: I0216 12:52:50.981640 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.025499 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-log-httpd\") pod \"53f42733-a32b-4b85-b53d-842ffb840563\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.025610 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-run-httpd\") pod \"53f42733-a32b-4b85-b53d-842ffb840563\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.025664 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rwzp\" (UniqueName: \"kubernetes.io/projected/53f42733-a32b-4b85-b53d-842ffb840563-kube-api-access-6rwzp\") pod \"53f42733-a32b-4b85-b53d-842ffb840563\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.025691 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-scripts\") pod \"53f42733-a32b-4b85-b53d-842ffb840563\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.026322 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "53f42733-a32b-4b85-b53d-842ffb840563" (UID: "53f42733-a32b-4b85-b53d-842ffb840563"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.026517 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-scripts\") pod \"94da3e05-8956-4ab9-b272-46b6afcf14d3\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.026551 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-config-data\") pod \"53f42733-a32b-4b85-b53d-842ffb840563\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.026575 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-combined-ca-bundle\") pod \"53f42733-a32b-4b85-b53d-842ffb840563\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.026594 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-config-data\") pod \"94da3e05-8956-4ab9-b272-46b6afcf14d3\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.026614 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89ddz\" (UniqueName: \"kubernetes.io/projected/94da3e05-8956-4ab9-b272-46b6afcf14d3-kube-api-access-89ddz\") pod \"94da3e05-8956-4ab9-b272-46b6afcf14d3\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.026665 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-sg-core-conf-yaml\") pod \"53f42733-a32b-4b85-b53d-842ffb840563\" (UID: \"53f42733-a32b-4b85-b53d-842ffb840563\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.027450 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-logs\") pod \"94da3e05-8956-4ab9-b272-46b6afcf14d3\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.027872 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.029978 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "53f42733-a32b-4b85-b53d-842ffb840563" (UID: "53f42733-a32b-4b85-b53d-842ffb840563"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.031117 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-logs" (OuterVolumeSpecName: "logs") pod "94da3e05-8956-4ab9-b272-46b6afcf14d3" (UID: "94da3e05-8956-4ab9-b272-46b6afcf14d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.044059 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-scripts" (OuterVolumeSpecName: "scripts") pod "53f42733-a32b-4b85-b53d-842ffb840563" (UID: "53f42733-a32b-4b85-b53d-842ffb840563"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.045284 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-scripts" (OuterVolumeSpecName: "scripts") pod "94da3e05-8956-4ab9-b272-46b6afcf14d3" (UID: "94da3e05-8956-4ab9-b272-46b6afcf14d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.045423 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94da3e05-8956-4ab9-b272-46b6afcf14d3-kube-api-access-89ddz" (OuterVolumeSpecName: "kube-api-access-89ddz") pod "94da3e05-8956-4ab9-b272-46b6afcf14d3" (UID: "94da3e05-8956-4ab9-b272-46b6afcf14d3"). InnerVolumeSpecName "kube-api-access-89ddz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.048215 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f42733-a32b-4b85-b53d-842ffb840563-kube-api-access-6rwzp" (OuterVolumeSpecName: "kube-api-access-6rwzp") pod "53f42733-a32b-4b85-b53d-842ffb840563" (UID: "53f42733-a32b-4b85-b53d-842ffb840563"). InnerVolumeSpecName "kube-api-access-6rwzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.088461 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "53f42733-a32b-4b85-b53d-842ffb840563" (UID: "53f42733-a32b-4b85-b53d-842ffb840563"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.126252 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-config-data" (OuterVolumeSpecName: "config-data") pod "94da3e05-8956-4ab9-b272-46b6afcf14d3" (UID: "94da3e05-8956-4ab9-b272-46b6afcf14d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.129333 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-public-tls-certs\") pod \"94da3e05-8956-4ab9-b272-46b6afcf14d3\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.129425 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-httpd-run\") pod \"94da3e05-8956-4ab9-b272-46b6afcf14d3\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.129681 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"94da3e05-8956-4ab9-b272-46b6afcf14d3\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.129714 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-combined-ca-bundle\") pod \"94da3e05-8956-4ab9-b272-46b6afcf14d3\" (UID: \"94da3e05-8956-4ab9-b272-46b6afcf14d3\") " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.130279 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/53f42733-a32b-4b85-b53d-842ffb840563-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.130304 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rwzp\" (UniqueName: \"kubernetes.io/projected/53f42733-a32b-4b85-b53d-842ffb840563-kube-api-access-6rwzp\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.130320 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.130334 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.130348 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.130360 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89ddz\" (UniqueName: \"kubernetes.io/projected/94da3e05-8956-4ab9-b272-46b6afcf14d3-kube-api-access-89ddz\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.130373 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.130387 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc 
kubenswrapper[4799]: I0216 12:52:51.130426 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94da3e05-8956-4ab9-b272-46b6afcf14d3" (UID: "94da3e05-8956-4ab9-b272-46b6afcf14d3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.136248 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "94da3e05-8956-4ab9-b272-46b6afcf14d3" (UID: "94da3e05-8956-4ab9-b272-46b6afcf14d3"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.140056 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53f42733-a32b-4b85-b53d-842ffb840563" (UID: "53f42733-a32b-4b85-b53d-842ffb840563"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.157490 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94da3e05-8956-4ab9-b272-46b6afcf14d3" (UID: "94da3e05-8956-4ab9-b272-46b6afcf14d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.169409 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-config-data" (OuterVolumeSpecName: "config-data") pod "53f42733-a32b-4b85-b53d-842ffb840563" (UID: "53f42733-a32b-4b85-b53d-842ffb840563"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.201294 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94da3e05-8956-4ab9-b272-46b6afcf14d3" (UID: "94da3e05-8956-4ab9-b272-46b6afcf14d3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.241850 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.241908 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94da3e05-8956-4ab9-b272-46b6afcf14d3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.241923 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.241941 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f42733-a32b-4b85-b53d-842ffb840563-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 
crc kubenswrapper[4799]: I0216 12:52:51.241978 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.241991 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da3e05-8956-4ab9-b272-46b6afcf14d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.283231 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.343589 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.513422 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53f42733-a32b-4b85-b53d-842ffb840563","Type":"ContainerDied","Data":"6df5b815b5c00219640bb4559ce54419df0662b68003f78e1261ed8542ed6c9d"} Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.513474 4799 scope.go:117] "RemoveContainer" containerID="b66fa3b4424d9a17c40c30c4ab123a2ba46f8e6a38d0759345198f3f2b92ec18" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.513606 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.517020 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94da3e05-8956-4ab9-b272-46b6afcf14d3","Type":"ContainerDied","Data":"b8fd6cb63c424b333b4de8e09b40e3e7baa8d90333cee6e8439235e989e50774"} Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.517219 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.524719 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8e024c88-16fc-4003-bc76-165ac4445e8f","Type":"ContainerStarted","Data":"8b63fbefa478da035b9aad56ffcde081b1f22dd29954b20393c2246535b93843"} Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.545748 4799 scope.go:117] "RemoveContainer" containerID="0b9cd67e2a46975e53cd8d565a95ef99a5d4bd17215447b5c416fcf88beb621d" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.556824 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.580362 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.602304 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:51 crc kubenswrapper[4799]: E0216 12:52:51.603078 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="sg-core" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603095 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="sg-core" Feb 16 12:52:51 crc kubenswrapper[4799]: E0216 12:52:51.603115 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="ceilometer-central-agent" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603139 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="ceilometer-central-agent" Feb 16 12:52:51 crc kubenswrapper[4799]: E0216 12:52:51.603154 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerName="glance-log" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603164 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerName="glance-log" Feb 16 12:52:51 crc kubenswrapper[4799]: E0216 12:52:51.603174 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerName="glance-httpd" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603181 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerName="glance-httpd" Feb 16 12:52:51 crc kubenswrapper[4799]: E0216 12:52:51.603197 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="proxy-httpd" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603203 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="proxy-httpd" Feb 16 12:52:51 crc kubenswrapper[4799]: E0216 12:52:51.603221 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="ceilometer-notification-agent" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603228 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="ceilometer-notification-agent" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603504 4799 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="ceilometer-notification-agent" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603519 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="ceilometer-central-agent" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603532 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerName="glance-log" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603544 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="sg-core" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603555 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f42733-a32b-4b85-b53d-842ffb840563" containerName="proxy-httpd" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.603578 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" containerName="glance-httpd" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.608261 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.611685 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.612642 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.614118 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.620839 4799 scope.go:117] "RemoveContainer" containerID="adacb7f15ca185ed5bdf64f13c3526e805b3708d8ffeefb41148fef96f86bd18" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.623870 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.633253 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.633785 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.339567883 podStartE2EDuration="19.633768472s" podCreationTimestamp="2026-02-16 12:52:32 +0000 UTC" firstStartedPulling="2026-02-16 12:52:33.312507291 +0000 UTC m=+1258.905522635" lastFinishedPulling="2026-02-16 12:52:50.6067079 +0000 UTC m=+1276.199723224" observedRunningTime="2026-02-16 12:52:51.597436738 +0000 UTC m=+1277.190452072" watchObservedRunningTime="2026-02-16 12:52:51.633768472 +0000 UTC m=+1277.226783806" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.649208 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-run-httpd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " 
pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.649286 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64xd\" (UniqueName: \"kubernetes.io/projected/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-kube-api-access-g64xd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.649328 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-scripts\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.649375 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-log-httpd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.649402 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.649462 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-config-data\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.649503 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.653293 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.655443 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.661329 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.661700 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.663824 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.686101 4799 scope.go:117] "RemoveContainer" containerID="93c4f9bcdd3b8aa3a53b4baddec229ace78858c76cbc43de1ca4ae9ba436fb08" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.721217 4799 scope.go:117] "RemoveContainer" containerID="bb7dd9ef23c9a03af1905e08d3a8471c8d1354dbdf48d6ce0fd51be004dc695e" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754473 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-log-httpd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754527 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754547 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754574 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-scripts\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754611 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-config-data\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754631 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-config-data\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754652 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/0241ff0c-3747-414a-b48e-72ac52d5836a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754675 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754690 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0241ff0c-3747-414a-b48e-72ac52d5836a-logs\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754742 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-run-httpd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754764 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5s4b\" (UniqueName: \"kubernetes.io/projected/0241ff0c-3747-414a-b48e-72ac52d5836a-kube-api-access-m5s4b\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754788 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754815 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g64xd\" (UniqueName: \"kubernetes.io/projected/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-kube-api-access-g64xd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754845 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.754866 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-scripts\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.756153 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-log-httpd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.759386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-run-httpd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 
12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.761151 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.763066 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-scripts\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.763916 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-config-data\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.764850 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.783571 4799 scope.go:117] "RemoveContainer" containerID="f4d2c422762feab79fce7666498ee40316aba4ac7c11a6a06d7feeb182c32f42" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.789904 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g64xd\" (UniqueName: \"kubernetes.io/projected/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-kube-api-access-g64xd\") pod \"ceilometer-0\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") " pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.793587 4799 
patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.793650 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.793703 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.794551 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02716d4728e3df68a334a717adc33b15d61e7b7d0fc4e582388c3db1323e8e1a"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.794622 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://02716d4728e3df68a334a717adc33b15d61e7b7d0fc4e582388c3db1323e8e1a" gracePeriod=600 Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.864840 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5s4b\" (UniqueName: \"kubernetes.io/projected/0241ff0c-3747-414a-b48e-72ac52d5836a-kube-api-access-m5s4b\") pod 
\"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.864943 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.865029 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.865207 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.865277 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-scripts\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.865353 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-config-data\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " 
pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.865382 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0241ff0c-3747-414a-b48e-72ac52d5836a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.865415 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0241ff0c-3747-414a-b48e-72ac52d5836a-logs\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.867780 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0241ff0c-3747-414a-b48e-72ac52d5836a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.868025 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.868386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0241ff0c-3747-414a-b48e-72ac52d5836a-logs\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 
12:52:51.875386 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-config-data\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.876695 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.877432 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.893459 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0241ff0c-3747-414a-b48e-72ac52d5836a-scripts\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.893864 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5s4b\" (UniqueName: \"kubernetes.io/projected/0241ff0c-3747-414a-b48e-72ac52d5836a-kube-api-access-m5s4b\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.927067 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0241ff0c-3747-414a-b48e-72ac52d5836a\") " pod="openstack/glance-default-external-api-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.937941 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:52:51 crc kubenswrapper[4799]: I0216 12:52:51.994282 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.006731 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.072274 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-httpd-run\") pod \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.072386 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-config-data\") pod \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.072513 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h6cw\" (UniqueName: \"kubernetes.io/projected/4e8b5246-e2d5-4349-aa8c-d58091276c4b-kube-api-access-9h6cw\") pod \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.072564 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-internal-tls-certs\") pod \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.072610 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-scripts\") pod \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.072741 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-logs\") pod \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.072785 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-combined-ca-bundle\") pod \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.072842 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\" (UID: \"4e8b5246-e2d5-4349-aa8c-d58091276c4b\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.083629 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-logs" (OuterVolumeSpecName: "logs") pod "4e8b5246-e2d5-4349-aa8c-d58091276c4b" (UID: "4e8b5246-e2d5-4349-aa8c-d58091276c4b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.084481 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4e8b5246-e2d5-4349-aa8c-d58091276c4b" (UID: "4e8b5246-e2d5-4349-aa8c-d58091276c4b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.090161 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "4e8b5246-e2d5-4349-aa8c-d58091276c4b" (UID: "4e8b5246-e2d5-4349-aa8c-d58091276c4b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.108418 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e8b5246-e2d5-4349-aa8c-d58091276c4b-kube-api-access-9h6cw" (OuterVolumeSpecName: "kube-api-access-9h6cw") pod "4e8b5246-e2d5-4349-aa8c-d58091276c4b" (UID: "4e8b5246-e2d5-4349-aa8c-d58091276c4b"). InnerVolumeSpecName "kube-api-access-9h6cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.108556 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-scripts" (OuterVolumeSpecName: "scripts") pod "4e8b5246-e2d5-4349-aa8c-d58091276c4b" (UID: "4e8b5246-e2d5-4349-aa8c-d58091276c4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.175568 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h6cw\" (UniqueName: \"kubernetes.io/projected/4e8b5246-e2d5-4349-aa8c-d58091276c4b-kube-api-access-9h6cw\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.175606 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.175618 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.175646 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.175658 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e8b5246-e2d5-4349-aa8c-d58091276c4b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.192416 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4e8b5246-e2d5-4349-aa8c-d58091276c4b" (UID: "4e8b5246-e2d5-4349-aa8c-d58091276c4b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.220183 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-config-data" (OuterVolumeSpecName: "config-data") pod "4e8b5246-e2d5-4349-aa8c-d58091276c4b" (UID: "4e8b5246-e2d5-4349-aa8c-d58091276c4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.227231 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e8b5246-e2d5-4349-aa8c-d58091276c4b" (UID: "4e8b5246-e2d5-4349-aa8c-d58091276c4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.245166 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.277811 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.283975 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.284717 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 
12:52:52.284951 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e8b5246-e2d5-4349-aa8c-d58091276c4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.279381 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.404773 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtdq8\" (UniqueName: \"kubernetes.io/projected/02e06f59-2164-4486-9138-2819bf6dcf26-kube-api-access-gtdq8\") pod \"02e06f59-2164-4486-9138-2819bf6dcf26\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.404843 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-config\") pod \"02e06f59-2164-4486-9138-2819bf6dcf26\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.404969 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-ovndb-tls-certs\") pod \"02e06f59-2164-4486-9138-2819bf6dcf26\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.405099 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-combined-ca-bundle\") pod \"02e06f59-2164-4486-9138-2819bf6dcf26\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.405208 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-httpd-config\") pod \"02e06f59-2164-4486-9138-2819bf6dcf26\" (UID: \"02e06f59-2164-4486-9138-2819bf6dcf26\") " Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.411258 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e06f59-2164-4486-9138-2819bf6dcf26-kube-api-access-gtdq8" (OuterVolumeSpecName: "kube-api-access-gtdq8") pod "02e06f59-2164-4486-9138-2819bf6dcf26" (UID: "02e06f59-2164-4486-9138-2819bf6dcf26"). InnerVolumeSpecName "kube-api-access-gtdq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.412282 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "02e06f59-2164-4486-9138-2819bf6dcf26" (UID: "02e06f59-2164-4486-9138-2819bf6dcf26"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.490595 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02e06f59-2164-4486-9138-2819bf6dcf26" (UID: "02e06f59-2164-4486-9138-2819bf6dcf26"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.508669 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtdq8\" (UniqueName: \"kubernetes.io/projected/02e06f59-2164-4486-9138-2819bf6dcf26-kube-api-access-gtdq8\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.508698 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.508709 4799 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.539207 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4e8b5246-e2d5-4349-aa8c-d58091276c4b","Type":"ContainerDied","Data":"4b959b9c56b48bc24701eae29365abdda20bc300fb61e2ed4f2b614830e3896e"} Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.539292 4799 scope.go:117] "RemoveContainer" containerID="cfa9d73604d9b0f5a0fd775c76ce5af06e86b6ad4bf12c502df9baf9e3f07850" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.539317 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.539961 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-config" (OuterVolumeSpecName: "config") pod "02e06f59-2164-4486-9138-2819bf6dcf26" (UID: "02e06f59-2164-4486-9138-2819bf6dcf26"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.570993 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c88d8b85b-zrggw" event={"ID":"02e06f59-2164-4486-9138-2819bf6dcf26","Type":"ContainerDied","Data":"c831e48bb25eac7be26eecad13c174de30305fa594fceb5a362b8bc9056fbb52"} Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.571098 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c88d8b85b-zrggw" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.577816 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "02e06f59-2164-4486-9138-2819bf6dcf26" (UID: "02e06f59-2164-4486-9138-2819bf6dcf26"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.588462 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="02716d4728e3df68a334a717adc33b15d61e7b7d0fc4e582388c3db1323e8e1a" exitCode=0 Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.589005 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"02716d4728e3df68a334a717adc33b15d61e7b7d0fc4e582388c3db1323e8e1a"} Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.618079 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.618143 4799 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/02e06f59-2164-4486-9138-2819bf6dcf26-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.683452 4799 scope.go:117] "RemoveContainer" containerID="25ddfa785289c44caca61f786064db6c542d2dd00358bf5705b0c8b0c69f1f32" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.698015 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.726875 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.736571 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 12:52:52 crc kubenswrapper[4799]: E0216 12:52:52.737118 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-log" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.737159 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-log" Feb 16 12:52:52 crc kubenswrapper[4799]: E0216 12:52:52.737181 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" containerName="neutron-httpd" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.737191 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" containerName="neutron-httpd" Feb 16 12:52:52 crc kubenswrapper[4799]: E0216 12:52:52.737209 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-httpd" Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.737218 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-httpd" Feb 16 12:52:52 crc kubenswrapper[4799]: 
E0216 12:52:52.737241 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" containerName="neutron-api"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.737250 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" containerName="neutron-api"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.737458 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-httpd"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.737486 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" containerName="neutron-api"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.737504 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" containerName="neutron-httpd"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.737523 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" containerName="glance-log"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.741873 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.748630 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.749361 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.756835 4799 scope.go:117] "RemoveContainer" containerID="25aa58840310b30ac173168ef70859089686d547aa93dddc55acca132b627fdd"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.799733 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.812076 4799 scope.go:117] "RemoveContainer" containerID="ab3d851bd648412916a9a4a939adaeb99fa7d4f4478a2a335f301c014dacb378"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.830934 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.831177 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71e60503-bb2b-452d-a96a-ef5ec0745d94-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.831207 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.831418 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e60503-bb2b-452d-a96a-ef5ec0745d94-logs\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.831506 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjlqq\" (UniqueName: \"kubernetes.io/projected/71e60503-bb2b-452d-a96a-ef5ec0745d94-kube-api-access-vjlqq\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.831614 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.831684 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.831702 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.832790 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.872034 4799 scope.go:117] "RemoveContainer" containerID="34c6876ea0db42f2332afd913f232568333ea876303d83a249ce58ef9abe96d8"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.912623 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c88d8b85b-zrggw"]
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.924076 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c88d8b85b-zrggw"]
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.941424 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.941534 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71e60503-bb2b-452d-a96a-ef5ec0745d94-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.941550 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.941604 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e60503-bb2b-452d-a96a-ef5ec0745d94-logs\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.941638 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjlqq\" (UniqueName: \"kubernetes.io/projected/71e60503-bb2b-452d-a96a-ef5ec0745d94-kube-api-access-vjlqq\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.941665 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.941698 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.941714 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.942155 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71e60503-bb2b-452d-a96a-ef5ec0745d94-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.943054 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.948570 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e60503-bb2b-452d-a96a-ef5ec0745d94-logs\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.968429 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.968600 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.968705 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.969911 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71e60503-bb2b-452d-a96a-ef5ec0745d94-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:52 crc kubenswrapper[4799]: I0216 12:52:52.977533 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjlqq\" (UniqueName: \"kubernetes.io/projected/71e60503-bb2b-452d-a96a-ef5ec0745d94-kube-api-access-vjlqq\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.004670 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"71e60503-bb2b-452d-a96a-ef5ec0745d94\") " pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.062092 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 12:52:53 crc kubenswrapper[4799]: W0216 12:52:53.064312 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0241ff0c_3747_414a_b48e_72ac52d5836a.slice/crio-2214736752b09825d516d8aad6b190335e23c1a3ed8662d771e86086d83c2333 WatchSource:0}: Error finding container 2214736752b09825d516d8aad6b190335e23c1a3ed8662d771e86086d83c2333: Status 404 returned error can't find the container with id 2214736752b09825d516d8aad6b190335e23c1a3ed8662d771e86086d83c2333
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.092398 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.161311 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e06f59-2164-4486-9138-2819bf6dcf26" path="/var/lib/kubelet/pods/02e06f59-2164-4486-9138-2819bf6dcf26/volumes"
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.163930 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e8b5246-e2d5-4349-aa8c-d58091276c4b" path="/var/lib/kubelet/pods/4e8b5246-e2d5-4349-aa8c-d58091276c4b/volumes"
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.164652 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f42733-a32b-4b85-b53d-842ffb840563" path="/var/lib/kubelet/pods/53f42733-a32b-4b85-b53d-842ffb840563/volumes"
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.167383 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94da3e05-8956-4ab9-b272-46b6afcf14d3" path="/var/lib/kubelet/pods/94da3e05-8956-4ab9-b272-46b6afcf14d3/volumes"
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.642061 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0241ff0c-3747-414a-b48e-72ac52d5836a","Type":"ContainerStarted","Data":"2214736752b09825d516d8aad6b190335e23c1a3ed8662d771e86086d83c2333"}
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.645516 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"1050488caaf418ecf3c571c9e2581e4f4da347fd70264d129d94529e08845412"}
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.663376 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerStarted","Data":"5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f"}
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.663434 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerStarted","Data":"e15aabf43ce62fa46591d2cfa5623e5caf596e0a3be225e5315f0e9f6d0ab523"}
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.716200 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 12:52:53 crc kubenswrapper[4799]: I0216 12:52:53.833101 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 12:52:53 crc kubenswrapper[4799]: W0216 12:52:53.847409 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71e60503_bb2b_452d_a96a_ef5ec0745d94.slice/crio-d213a93e69911bb7231064bfc74fe51106bd2929260ccc124f86e7cfe160e579 WatchSource:0}: Error finding container d213a93e69911bb7231064bfc74fe51106bd2929260ccc124f86e7cfe160e579: Status 404 returned error can't find the container with id d213a93e69911bb7231064bfc74fe51106bd2929260ccc124f86e7cfe160e579
Feb 16 12:52:54 crc kubenswrapper[4799]: I0216 12:52:54.678361 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerStarted","Data":"6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e"}
Feb 16 12:52:54 crc kubenswrapper[4799]: I0216 12:52:54.682227 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0241ff0c-3747-414a-b48e-72ac52d5836a","Type":"ContainerStarted","Data":"da0c626436480cc2732f4b3f390e5801fe41f7bef8f0d7614e73ba253b5e7b3e"}
Feb 16 12:52:54 crc kubenswrapper[4799]: I0216 12:52:54.686231 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71e60503-bb2b-452d-a96a-ef5ec0745d94","Type":"ContainerStarted","Data":"d213a93e69911bb7231064bfc74fe51106bd2929260ccc124f86e7cfe160e579"}
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.698415 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerStarted","Data":"c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246"}
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.700700 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0241ff0c-3747-414a-b48e-72ac52d5836a","Type":"ContainerStarted","Data":"eed26c6c46c6492368750aa47f35a26dfce006be273422151573f485db86e023"}
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.703282 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71e60503-bb2b-452d-a96a-ef5ec0745d94","Type":"ContainerStarted","Data":"122e4d6312f8f5890fc9ceb78135da786792c2e0b0be709e62a2fa79f8ccf628"}
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.703324 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71e60503-bb2b-452d-a96a-ef5ec0745d94","Type":"ContainerStarted","Data":"0d3c545ed1541f6ed7e0fbdba38d68ef9fc25c606801f39a88d5590ba1781175"}
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.733194 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.733176533 podStartE2EDuration="4.733176533s" podCreationTimestamp="2026-02-16 12:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:55.732572866 +0000 UTC m=+1281.325588210" watchObservedRunningTime="2026-02-16 12:52:55.733176533 +0000 UTC m=+1281.326191867"
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.795783 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.795762982 podStartE2EDuration="3.795762982s" podCreationTimestamp="2026-02-16 12:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:52:55.786191037 +0000 UTC m=+1281.379206371" watchObservedRunningTime="2026-02-16 12:52:55.795762982 +0000 UTC m=+1281.388778316"
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.873004 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.873049 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 16 12:52:55 crc kubenswrapper[4799]: I0216 12:52:55.873792 4799 scope.go:117] "RemoveContainer" containerID="7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02"
Feb 16 12:52:55 crc kubenswrapper[4799]: E0216 12:52:55.874017 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(89824920-bcd3-4640-b27b-68554fad00bb)\"" pod="openstack/watcher-decision-engine-0" podUID="89824920-bcd3-4640-b27b-68554fad00bb"
Feb 16 12:52:58 crc kubenswrapper[4799]: I0216 12:52:58.733589 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerStarted","Data":"6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9"}
Feb 16 12:52:58 crc kubenswrapper[4799]: I0216 12:52:58.734266 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 12:52:58 crc kubenswrapper[4799]: I0216 12:52:58.733927 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="sg-core" containerID="cri-o://c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246" gracePeriod=30
Feb 16 12:52:58 crc kubenswrapper[4799]: I0216 12:52:58.733997 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="proxy-httpd" containerID="cri-o://6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9" gracePeriod=30
Feb 16 12:52:58 crc kubenswrapper[4799]: I0216 12:52:58.734015 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="ceilometer-notification-agent" containerID="cri-o://6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e" gracePeriod=30
Feb 16 12:52:58 crc kubenswrapper[4799]: I0216 12:52:58.733760 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="ceilometer-central-agent" containerID="cri-o://5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f" gracePeriod=30
Feb 16 12:52:58 crc kubenswrapper[4799]: I0216 12:52:58.764472 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.108375262 podStartE2EDuration="7.764453265s" podCreationTimestamp="2026-02-16 12:52:51 +0000 UTC" firstStartedPulling="2026-02-16 12:52:52.826285369 +0000 UTC m=+1278.419300703" lastFinishedPulling="2026-02-16 12:52:57.482363372 +0000 UTC m=+1283.075378706" observedRunningTime="2026-02-16 12:52:58.762742545 +0000 UTC m=+1284.355757879" watchObservedRunningTime="2026-02-16 12:52:58.764453265 +0000 UTC m=+1284.357468599"
Feb 16 12:52:59 crc kubenswrapper[4799]: I0216 12:52:59.766226 4799 generic.go:334] "Generic (PLEG): container finished" podID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerID="6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9" exitCode=0
Feb 16 12:52:59 crc kubenswrapper[4799]: I0216 12:52:59.766590 4799 generic.go:334] "Generic (PLEG): container finished" podID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerID="c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246" exitCode=2
Feb 16 12:52:59 crc kubenswrapper[4799]: I0216 12:52:59.766603 4799 generic.go:334] "Generic (PLEG): container finished" podID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerID="6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e" exitCode=0
Feb 16 12:52:59 crc kubenswrapper[4799]: I0216 12:52:59.766292 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerDied","Data":"6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9"}
Feb 16 12:52:59 crc kubenswrapper[4799]: I0216 12:52:59.766656 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerDied","Data":"c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246"}
Feb 16 12:52:59 crc kubenswrapper[4799]: I0216 12:52:59.766671 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerDied","Data":"6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e"}
Feb 16 12:53:02 crc kubenswrapper[4799]: I0216 12:53:02.007969 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 12:53:02 crc kubenswrapper[4799]: I0216 12:53:02.008563 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 12:53:02 crc kubenswrapper[4799]: I0216 12:53:02.041308 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 12:53:02 crc kubenswrapper[4799]: I0216 12:53:02.058149 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 12:53:02 crc kubenswrapper[4799]: I0216 12:53:02.799081 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 12:53:02 crc kubenswrapper[4799]: I0216 12:53:02.799393 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.092695 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.092804 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.128915 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.143367 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.812843 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.817507 4799 generic.go:334] "Generic (PLEG): container finished" podID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerID="5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f" exitCode=0
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.817609 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.817616 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerDied","Data":"5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f"}
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.817675 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e","Type":"ContainerDied","Data":"e15aabf43ce62fa46591d2cfa5623e5caf596e0a3be225e5315f0e9f6d0ab523"}
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.817698 4799 scope.go:117] "RemoveContainer" containerID="6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.818448 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.818500 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.849418 4799 scope.go:117] "RemoveContainer" containerID="c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246"
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.880167 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-config-data\") pod \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") "
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.880349 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-scripts\") pod \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") "
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.880392 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-log-httpd\") pod \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") "
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.880426 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-run-httpd\") pod \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") "
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.880473 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-combined-ca-bundle\") pod \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") "
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.880567 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-sg-core-conf-yaml\") pod \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") "
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.880616 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g64xd\" (UniqueName: \"kubernetes.io/projected/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-kube-api-access-g64xd\") pod \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\" (UID: \"a8e9fa3f-47d6-4412-8401-7c6b6b640a2e\") "
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.881113 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" (UID: "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.881213 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" (UID: "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.881792 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.881815 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.893503 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-kube-api-access-g64xd" (OuterVolumeSpecName: "kube-api-access-g64xd") pod "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" (UID: "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e"). InnerVolumeSpecName "kube-api-access-g64xd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.896261 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-scripts" (OuterVolumeSpecName: "scripts") pod "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" (UID: "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.922639 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" (UID: "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.984804 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g64xd\" (UniqueName: \"kubernetes.io/projected/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-kube-api-access-g64xd\") on node \"crc\" DevicePath \"\""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.984841 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.984849 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 12:53:03 crc kubenswrapper[4799]: I0216 12:53:03.988804 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" (UID: "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.069432 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-config-data" (OuterVolumeSpecName: "config-data") pod "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" (UID: "a8e9fa3f-47d6-4412-8401-7c6b6b640a2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.086451 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.086501 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.164113 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.173083 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.188450 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 12:53:04 crc kubenswrapper[4799]: E0216 12:53:04.188828 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="ceilometer-central-agent"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.188843 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="ceilometer-central-agent"
Feb 16 12:53:04 crc kubenswrapper[4799]: E0216 12:53:04.188851 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="sg-core"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.188858 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="sg-core"
Feb 16 12:53:04 crc kubenswrapper[4799]: E0216 12:53:04.188889 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="proxy-httpd"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.188895 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="proxy-httpd"
Feb 16 12:53:04 crc kubenswrapper[4799]: E0216 12:53:04.188907 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="ceilometer-notification-agent"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.188913 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="ceilometer-notification-agent"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.189075 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="ceilometer-central-agent"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.189086 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="ceilometer-notification-agent"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.189104 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="sg-core"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.189111 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" containerName="proxy-httpd"
Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.190758 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.193971 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.194200 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.203710 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.290341 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-config-data\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.290734 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm7xm\" (UniqueName: \"kubernetes.io/projected/6830c16d-ca25-432a-a879-c7e5cb64c593-kube-api-access-bm7xm\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.290888 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.291019 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-run-httpd\") pod \"ceilometer-0\" (UID: 
\"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.291163 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.291273 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-scripts\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.291395 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-log-httpd\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.308959 4799 scope.go:117] "RemoveContainer" containerID="6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.334994 4799 scope.go:117] "RemoveContainer" containerID="5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.393030 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-run-httpd\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.393114 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.393196 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-scripts\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.393249 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-log-httpd\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.393349 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-config-data\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.393437 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm7xm\" (UniqueName: \"kubernetes.io/projected/6830c16d-ca25-432a-a879-c7e5cb64c593-kube-api-access-bm7xm\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.393474 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.394146 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-run-httpd\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.394284 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-log-httpd\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.398335 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.398928 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-scripts\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.400140 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-config-data\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.404955 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.416021 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm7xm\" (UniqueName: \"kubernetes.io/projected/6830c16d-ca25-432a-a879-c7e5cb64c593-kube-api-access-bm7xm\") pod \"ceilometer-0\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.507256 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.513281 4799 scope.go:117] "RemoveContainer" containerID="6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9" Feb 16 12:53:04 crc kubenswrapper[4799]: E0216 12:53:04.513735 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9\": container with ID starting with 6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9 not found: ID does not exist" containerID="6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.513768 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9"} err="failed to get container status \"6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9\": rpc error: code = NotFound desc = could not find container \"6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9\": container with ID starting with 6440bffd06bcfc8e67a17550f7822c0aecc0ac65adc32e1184d6e91068f503c9 not found: ID does not exist" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 
12:53:04.513793 4799 scope.go:117] "RemoveContainer" containerID="c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246" Feb 16 12:53:04 crc kubenswrapper[4799]: E0216 12:53:04.514059 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246\": container with ID starting with c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246 not found: ID does not exist" containerID="c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.514085 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246"} err="failed to get container status \"c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246\": rpc error: code = NotFound desc = could not find container \"c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246\": container with ID starting with c4ec790a4fb49f317c34a759b6653ff7d35ddac86a7538424c9f65760f59f246 not found: ID does not exist" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.514102 4799 scope.go:117] "RemoveContainer" containerID="6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e" Feb 16 12:53:04 crc kubenswrapper[4799]: E0216 12:53:04.514404 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e\": container with ID starting with 6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e not found: ID does not exist" containerID="6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.514432 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e"} err="failed to get container status \"6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e\": rpc error: code = NotFound desc = could not find container \"6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e\": container with ID starting with 6a3a00cdf4f9a5a4bfcf05cd1260778daa745fc1ef48ee81a8d7b536c7d8130e not found: ID does not exist" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.514448 4799 scope.go:117] "RemoveContainer" containerID="5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f" Feb 16 12:53:04 crc kubenswrapper[4799]: E0216 12:53:04.514650 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f\": container with ID starting with 5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f not found: ID does not exist" containerID="5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.514672 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f"} err="failed to get container status \"5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f\": rpc error: code = NotFound desc = could not find container \"5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f\": container with ID starting with 5d59e495cc8f603e8b168cee9861b4e9b5e6b4c461377ea7f9b85934b0803a4f not found: ID does not exist" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.830218 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:53:04 crc kubenswrapper[4799]: I0216 12:53:04.830610 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:53:05 
crc kubenswrapper[4799]: I0216 12:53:05.095740 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:53:05 crc kubenswrapper[4799]: W0216 12:53:05.108918 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6830c16d_ca25_432a_a879_c7e5cb64c593.slice/crio-e55a72f9b1c0822f4fe7b95cc772e5680b00855633454f63f14200ae2c5d38da WatchSource:0}: Error finding container e55a72f9b1c0822f4fe7b95cc772e5680b00855633454f63f14200ae2c5d38da: Status 404 returned error can't find the container with id e55a72f9b1c0822f4fe7b95cc772e5680b00855633454f63f14200ae2c5d38da Feb 16 12:53:05 crc kubenswrapper[4799]: I0216 12:53:05.162388 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e9fa3f-47d6-4412-8401-7c6b6b640a2e" path="/var/lib/kubelet/pods/a8e9fa3f-47d6-4412-8401-7c6b6b640a2e/volumes" Feb 16 12:53:05 crc kubenswrapper[4799]: I0216 12:53:05.551361 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 12:53:05 crc kubenswrapper[4799]: I0216 12:53:05.763935 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 12:53:05 crc kubenswrapper[4799]: I0216 12:53:05.840106 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerStarted","Data":"ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06"} Feb 16 12:53:05 crc kubenswrapper[4799]: I0216 12:53:05.840470 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerStarted","Data":"29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639"} Feb 16 12:53:05 crc kubenswrapper[4799]: I0216 12:53:05.840567 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerStarted","Data":"e55a72f9b1c0822f4fe7b95cc772e5680b00855633454f63f14200ae2c5d38da"} Feb 16 12:53:06 crc kubenswrapper[4799]: I0216 12:53:06.527494 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 12:53:06 crc kubenswrapper[4799]: I0216 12:53:06.527899 4799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 12:53:06 crc kubenswrapper[4799]: I0216 12:53:06.854629 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 12:53:08 crc kubenswrapper[4799]: I0216 12:53:08.136789 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:53:08 crc kubenswrapper[4799]: I0216 12:53:08.872054 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerStarted","Data":"e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f"} Feb 16 12:53:09 crc kubenswrapper[4799]: I0216 12:53:09.886020 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="ceilometer-central-agent" containerID="cri-o://29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639" gracePeriod=30 Feb 16 12:53:09 crc kubenswrapper[4799]: I0216 12:53:09.886179 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="ceilometer-notification-agent" containerID="cri-o://ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06" gracePeriod=30 Feb 16 12:53:09 crc kubenswrapper[4799]: I0216 12:53:09.885847 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerStarted","Data":"0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c"} Feb 16 12:53:09 crc kubenswrapper[4799]: I0216 12:53:09.886162 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="sg-core" containerID="cri-o://e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f" gracePeriod=30 Feb 16 12:53:09 crc kubenswrapper[4799]: I0216 12:53:09.887560 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 12:53:09 crc kubenswrapper[4799]: I0216 12:53:09.886116 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="proxy-httpd" containerID="cri-o://0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c" gracePeriod=30 Feb 16 12:53:09 crc kubenswrapper[4799]: I0216 12:53:09.922951 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.519832508 podStartE2EDuration="5.9229269s" podCreationTimestamp="2026-02-16 12:53:04 +0000 UTC" firstStartedPulling="2026-02-16 12:53:05.111749699 +0000 UTC m=+1290.704765033" lastFinishedPulling="2026-02-16 12:53:09.514844091 +0000 UTC m=+1295.107859425" observedRunningTime="2026-02-16 12:53:09.91422999 +0000 UTC m=+1295.507245324" watchObservedRunningTime="2026-02-16 12:53:09.9229269 +0000 UTC m=+1295.515942234" Feb 16 12:53:10 crc kubenswrapper[4799]: I0216 12:53:10.150103 4799 scope.go:117] "RemoveContainer" containerID="7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02" Feb 16 12:53:10 crc kubenswrapper[4799]: I0216 12:53:10.899418 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerStarted","Data":"121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6"} Feb 16 12:53:10 crc kubenswrapper[4799]: I0216 12:53:10.902495 4799 generic.go:334] "Generic (PLEG): container finished" podID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerID="e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f" exitCode=2 Feb 16 12:53:10 crc kubenswrapper[4799]: I0216 12:53:10.902531 4799 generic.go:334] "Generic (PLEG): container finished" podID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerID="ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06" exitCode=0 Feb 16 12:53:10 crc kubenswrapper[4799]: I0216 12:53:10.902534 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerDied","Data":"e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f"} Feb 16 12:53:10 crc kubenswrapper[4799]: I0216 12:53:10.902573 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerDied","Data":"ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06"} Feb 16 12:53:12 crc kubenswrapper[4799]: I0216 12:53:12.939192 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2j7p7"] Feb 16 12:53:12 crc kubenswrapper[4799]: I0216 12:53:12.941135 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:12 crc kubenswrapper[4799]: I0216 12:53:12.960892 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2j7p7"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.004907 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cmgtj"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.006213 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.039976 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-523c-account-create-update-ldgmh"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.041627 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.044530 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.070403 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cmgtj"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.082499 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-523c-account-create-update-ldgmh"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.091403 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qgd\" (UniqueName: \"kubernetes.io/projected/3f27d260-32a5-4071-b01e-5674ddf856ec-kube-api-access-x8qgd\") pod \"nova-api-db-create-2j7p7\" (UID: \"3f27d260-32a5-4071-b01e-5674ddf856ec\") " pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.091535 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f27d260-32a5-4071-b01e-5674ddf856ec-operator-scripts\") pod \"nova-api-db-create-2j7p7\" (UID: \"3f27d260-32a5-4071-b01e-5674ddf856ec\") " pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.124269 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qqjbv"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.125651 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.165821 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qqjbv"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.193098 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8qgd\" (UniqueName: \"kubernetes.io/projected/3f27d260-32a5-4071-b01e-5674ddf856ec-kube-api-access-x8qgd\") pod \"nova-api-db-create-2j7p7\" (UID: \"3f27d260-32a5-4071-b01e-5674ddf856ec\") " pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.193178 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fe0ab4-e31e-46ec-9e5e-d806b8423138-operator-scripts\") pod \"nova-api-523c-account-create-update-ldgmh\" (UID: \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\") " pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.193256 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9msr4\" (UniqueName: \"kubernetes.io/projected/36fe0ab4-e31e-46ec-9e5e-d806b8423138-kube-api-access-9msr4\") pod \"nova-api-523c-account-create-update-ldgmh\" (UID: \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\") " 
pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.193319 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgpp\" (UniqueName: \"kubernetes.io/projected/db161b46-fe7a-4bd4-826b-052cbcef338f-kube-api-access-7bgpp\") pod \"nova-cell0-db-create-cmgtj\" (UID: \"db161b46-fe7a-4bd4-826b-052cbcef338f\") " pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.193354 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db161b46-fe7a-4bd4-826b-052cbcef338f-operator-scripts\") pod \"nova-cell0-db-create-cmgtj\" (UID: \"db161b46-fe7a-4bd4-826b-052cbcef338f\") " pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.193389 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f27d260-32a5-4071-b01e-5674ddf856ec-operator-scripts\") pod \"nova-api-db-create-2j7p7\" (UID: \"3f27d260-32a5-4071-b01e-5674ddf856ec\") " pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.194466 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f27d260-32a5-4071-b01e-5674ddf856ec-operator-scripts\") pod \"nova-api-db-create-2j7p7\" (UID: \"3f27d260-32a5-4071-b01e-5674ddf856ec\") " pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.221340 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0b02-account-create-update-pbs7h"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.222002 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8qgd\" (UniqueName: 
\"kubernetes.io/projected/3f27d260-32a5-4071-b01e-5674ddf856ec-kube-api-access-x8qgd\") pod \"nova-api-db-create-2j7p7\" (UID: \"3f27d260-32a5-4071-b01e-5674ddf856ec\") " pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.225245 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.230067 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b02-account-create-update-pbs7h"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.230256 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.266584 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.297673 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fe0ab4-e31e-46ec-9e5e-d806b8423138-operator-scripts\") pod \"nova-api-523c-account-create-update-ldgmh\" (UID: \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\") " pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.297751 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rx2r\" (UniqueName: \"kubernetes.io/projected/e827d55c-315b-4615-bddb-71bef534c284-kube-api-access-2rx2r\") pod \"nova-cell1-db-create-qqjbv\" (UID: \"e827d55c-315b-4615-bddb-71bef534c284\") " pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.297800 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9msr4\" (UniqueName: 
\"kubernetes.io/projected/36fe0ab4-e31e-46ec-9e5e-d806b8423138-kube-api-access-9msr4\") pod \"nova-api-523c-account-create-update-ldgmh\" (UID: \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\") " pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.297855 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgpp\" (UniqueName: \"kubernetes.io/projected/db161b46-fe7a-4bd4-826b-052cbcef338f-kube-api-access-7bgpp\") pod \"nova-cell0-db-create-cmgtj\" (UID: \"db161b46-fe7a-4bd4-826b-052cbcef338f\") " pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.297888 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db161b46-fe7a-4bd4-826b-052cbcef338f-operator-scripts\") pod \"nova-cell0-db-create-cmgtj\" (UID: \"db161b46-fe7a-4bd4-826b-052cbcef338f\") " pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.297947 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e827d55c-315b-4615-bddb-71bef534c284-operator-scripts\") pod \"nova-cell1-db-create-qqjbv\" (UID: \"e827d55c-315b-4615-bddb-71bef534c284\") " pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.298923 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db161b46-fe7a-4bd4-826b-052cbcef338f-operator-scripts\") pod \"nova-cell0-db-create-cmgtj\" (UID: \"db161b46-fe7a-4bd4-826b-052cbcef338f\") " pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.299440 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/36fe0ab4-e31e-46ec-9e5e-d806b8423138-operator-scripts\") pod \"nova-api-523c-account-create-update-ldgmh\" (UID: \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\") " pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.322722 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bgpp\" (UniqueName: \"kubernetes.io/projected/db161b46-fe7a-4bd4-826b-052cbcef338f-kube-api-access-7bgpp\") pod \"nova-cell0-db-create-cmgtj\" (UID: \"db161b46-fe7a-4bd4-826b-052cbcef338f\") " pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.331639 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9msr4\" (UniqueName: \"kubernetes.io/projected/36fe0ab4-e31e-46ec-9e5e-d806b8423138-kube-api-access-9msr4\") pod \"nova-api-523c-account-create-update-ldgmh\" (UID: \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\") " pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.339112 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.367110 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.402342 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rx2r\" (UniqueName: \"kubernetes.io/projected/e827d55c-315b-4615-bddb-71bef534c284-kube-api-access-2rx2r\") pod \"nova-cell1-db-create-qqjbv\" (UID: \"e827d55c-315b-4615-bddb-71bef534c284\") " pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.402417 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-operator-scripts\") pod \"nova-cell0-0b02-account-create-update-pbs7h\" (UID: \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\") " pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.402468 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e827d55c-315b-4615-bddb-71bef534c284-operator-scripts\") pod \"nova-cell1-db-create-qqjbv\" (UID: \"e827d55c-315b-4615-bddb-71bef534c284\") " pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.402530 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t68vw\" (UniqueName: \"kubernetes.io/projected/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-kube-api-access-t68vw\") pod \"nova-cell0-0b02-account-create-update-pbs7h\" (UID: \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\") " pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.403957 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e827d55c-315b-4615-bddb-71bef534c284-operator-scripts\") pod \"nova-cell1-db-create-qqjbv\" (UID: \"e827d55c-315b-4615-bddb-71bef534c284\") " pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.450691 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-378d-account-create-update-sjsz5"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.452495 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.463312 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rx2r\" (UniqueName: \"kubernetes.io/projected/e827d55c-315b-4615-bddb-71bef534c284-kube-api-access-2rx2r\") pod \"nova-cell1-db-create-qqjbv\" (UID: \"e827d55c-315b-4615-bddb-71bef534c284\") " pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.470304 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-378d-account-create-update-sjsz5"] Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.471497 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.508511 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-operator-scripts\") pod \"nova-cell0-0b02-account-create-update-pbs7h\" (UID: \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\") " pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.508659 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t68vw\" (UniqueName: 
\"kubernetes.io/projected/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-kube-api-access-t68vw\") pod \"nova-cell0-0b02-account-create-update-pbs7h\" (UID: \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\") " pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.510061 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-operator-scripts\") pod \"nova-cell0-0b02-account-create-update-pbs7h\" (UID: \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\") " pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.546636 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t68vw\" (UniqueName: \"kubernetes.io/projected/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-kube-api-access-t68vw\") pod \"nova-cell0-0b02-account-create-update-pbs7h\" (UID: \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\") " pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.594777 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.611437 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvxj\" (UniqueName: \"kubernetes.io/projected/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-kube-api-access-ggvxj\") pod \"nova-cell1-378d-account-create-update-sjsz5\" (UID: \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\") " pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.611512 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-operator-scripts\") pod \"nova-cell1-378d-account-create-update-sjsz5\" (UID: \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\") " pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.719351 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvxj\" (UniqueName: \"kubernetes.io/projected/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-kube-api-access-ggvxj\") pod \"nova-cell1-378d-account-create-update-sjsz5\" (UID: \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\") " pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.719433 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-operator-scripts\") pod \"nova-cell1-378d-account-create-update-sjsz5\" (UID: \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\") " pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.721781 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-operator-scripts\") pod \"nova-cell1-378d-account-create-update-sjsz5\" (UID: \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\") " pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.742134 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvxj\" (UniqueName: \"kubernetes.io/projected/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-kube-api-access-ggvxj\") pod \"nova-cell1-378d-account-create-update-sjsz5\" (UID: \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\") " pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.756566 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.842387 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:13 crc kubenswrapper[4799]: I0216 12:53:13.971422 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2j7p7"] Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.193220 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cmgtj"] Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.201748 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-523c-account-create-update-ldgmh"] Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.438787 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b02-account-create-update-pbs7h"] Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.577826 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-378d-account-create-update-sjsz5"] Feb 16 12:53:14 crc kubenswrapper[4799]: W0216 12:53:14.614056 4799 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode827d55c_315b_4615_bddb_71bef534c284.slice/crio-4c4d797ca5d8962cb7ac6652f1b54dbfb1debfe3ac56b3dc7ba367ef4b3ecb5e WatchSource:0}: Error finding container 4c4d797ca5d8962cb7ac6652f1b54dbfb1debfe3ac56b3dc7ba367ef4b3ecb5e: Status 404 returned error can't find the container with id 4c4d797ca5d8962cb7ac6652f1b54dbfb1debfe3ac56b3dc7ba367ef4b3ecb5e Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.615890 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qqjbv"] Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.991839 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cmgtj" event={"ID":"db161b46-fe7a-4bd4-826b-052cbcef338f","Type":"ContainerStarted","Data":"d803bea1cd9c8673e2dfabb749747cac89d07d53e8122df4ce16a4ab73dc0994"} Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.991886 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cmgtj" event={"ID":"db161b46-fe7a-4bd4-826b-052cbcef338f","Type":"ContainerStarted","Data":"823f7e06378ac3babba99b9ba4fe6205e2da18e1a950304bb231cd28e5f77a2c"} Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.998405 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qqjbv" event={"ID":"e827d55c-315b-4615-bddb-71bef534c284","Type":"ContainerStarted","Data":"79f693d265a2142285cebcddd5a5f46075ebfe497bbbc8ceb870e9b848ae7a28"} Feb 16 12:53:14 crc kubenswrapper[4799]: I0216 12:53:14.998454 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qqjbv" event={"ID":"e827d55c-315b-4615-bddb-71bef534c284","Type":"ContainerStarted","Data":"4c4d797ca5d8962cb7ac6652f1b54dbfb1debfe3ac56b3dc7ba367ef4b3ecb5e"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.008034 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-523c-account-create-update-ldgmh" event={"ID":"36fe0ab4-e31e-46ec-9e5e-d806b8423138","Type":"ContainerStarted","Data":"36423b169d8031e33fcf223682e5abfd19f1e2465c3295f2fe025d97c32be5b5"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.008108 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-523c-account-create-update-ldgmh" event={"ID":"36fe0ab4-e31e-46ec-9e5e-d806b8423138","Type":"ContainerStarted","Data":"fd8c010a5d393bf2f961bba2858780a5e7a51b7cabaf5a9dec0032dfbc3bcc47"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.016310 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2j7p7" event={"ID":"3f27d260-32a5-4071-b01e-5674ddf856ec","Type":"ContainerStarted","Data":"f9df83a5d1c04e808d4d590a0b9a71370a73e735e110822ca0ca7b8014bf2552"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.016351 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2j7p7" event={"ID":"3f27d260-32a5-4071-b01e-5674ddf856ec","Type":"ContainerStarted","Data":"d10328f91457339363bb46b60e83eb67473521ccf8decad80cd1a9845a5efc14"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.017516 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-cmgtj" podStartSLOduration=3.017498762 podStartE2EDuration="3.017498762s" podCreationTimestamp="2026-02-16 12:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:15.008109904 +0000 UTC m=+1300.601125238" watchObservedRunningTime="2026-02-16 12:53:15.017498762 +0000 UTC m=+1300.610514096" Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.030771 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-qqjbv" podStartSLOduration=2.030750612 podStartE2EDuration="2.030750612s" 
podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:15.025170222 +0000 UTC m=+1300.618185556" watchObservedRunningTime="2026-02-16 12:53:15.030750612 +0000 UTC m=+1300.623765946" Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.036212 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" event={"ID":"b2be1ba0-aac6-4d75-a35f-31ba41b971d5","Type":"ContainerStarted","Data":"399ed703e0088bb71a40985c8e04235e692594d2e384f0dc895d67186f47f1de"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.036277 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" event={"ID":"b2be1ba0-aac6-4d75-a35f-31ba41b971d5","Type":"ContainerStarted","Data":"aab4cc5e1d544337a7ff6a7b4deea63577d3c0fc0b4a4405b2c915dac6a4d0e1"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.048295 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-523c-account-create-update-ldgmh" podStartSLOduration=3.048276583 podStartE2EDuration="3.048276583s" podCreationTimestamp="2026-02-16 12:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:15.038544385 +0000 UTC m=+1300.631559719" watchObservedRunningTime="2026-02-16 12:53:15.048276583 +0000 UTC m=+1300.641291917" Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.050682 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" event={"ID":"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d","Type":"ContainerStarted","Data":"4ef2d75c52641ba881694cedc7579fa2cacc77fe30ba7c7be4450f8d720c268c"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.050727 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" event={"ID":"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d","Type":"ContainerStarted","Data":"8b1b641199748b847c855f51f53ad092f6257ddf2ec5384b51126de5cfbbdfe7"} Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.062722 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" podStartSLOduration=2.062701586 podStartE2EDuration="2.062701586s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:15.055636664 +0000 UTC m=+1300.648651988" watchObservedRunningTime="2026-02-16 12:53:15.062701586 +0000 UTC m=+1300.655716920" Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.081986 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-2j7p7" podStartSLOduration=3.081971777 podStartE2EDuration="3.081971777s" podCreationTimestamp="2026-02-16 12:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:15.080522535 +0000 UTC m=+1300.673537869" watchObservedRunningTime="2026-02-16 12:53:15.081971777 +0000 UTC m=+1300.674987111" Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.097377 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" podStartSLOduration=2.097358057 podStartE2EDuration="2.097358057s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:15.09431188 +0000 UTC m=+1300.687327214" watchObservedRunningTime="2026-02-16 12:53:15.097358057 +0000 UTC m=+1300.690373391" Feb 16 12:53:15 crc 
kubenswrapper[4799]: I0216 12:53:15.872583 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:15 crc kubenswrapper[4799]: I0216 12:53:15.900935 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.066397 4799 generic.go:334] "Generic (PLEG): container finished" podID="3f27d260-32a5-4071-b01e-5674ddf856ec" containerID="f9df83a5d1c04e808d4d590a0b9a71370a73e735e110822ca0ca7b8014bf2552" exitCode=0 Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.066474 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2j7p7" event={"ID":"3f27d260-32a5-4071-b01e-5674ddf856ec","Type":"ContainerDied","Data":"f9df83a5d1c04e808d4d590a0b9a71370a73e735e110822ca0ca7b8014bf2552"} Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.078342 4799 generic.go:334] "Generic (PLEG): container finished" podID="b2be1ba0-aac6-4d75-a35f-31ba41b971d5" containerID="399ed703e0088bb71a40985c8e04235e692594d2e384f0dc895d67186f47f1de" exitCode=0 Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.078515 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" event={"ID":"b2be1ba0-aac6-4d75-a35f-31ba41b971d5","Type":"ContainerDied","Data":"399ed703e0088bb71a40985c8e04235e692594d2e384f0dc895d67186f47f1de"} Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.080164 4799 generic.go:334] "Generic (PLEG): container finished" podID="e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d" containerID="4ef2d75c52641ba881694cedc7579fa2cacc77fe30ba7c7be4450f8d720c268c" exitCode=0 Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.080230 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" 
event={"ID":"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d","Type":"ContainerDied","Data":"4ef2d75c52641ba881694cedc7579fa2cacc77fe30ba7c7be4450f8d720c268c"} Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.088997 4799 generic.go:334] "Generic (PLEG): container finished" podID="db161b46-fe7a-4bd4-826b-052cbcef338f" containerID="d803bea1cd9c8673e2dfabb749747cac89d07d53e8122df4ce16a4ab73dc0994" exitCode=0 Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.089181 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cmgtj" event={"ID":"db161b46-fe7a-4bd4-826b-052cbcef338f","Type":"ContainerDied","Data":"d803bea1cd9c8673e2dfabb749747cac89d07d53e8122df4ce16a4ab73dc0994"} Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.097795 4799 generic.go:334] "Generic (PLEG): container finished" podID="e827d55c-315b-4615-bddb-71bef534c284" containerID="79f693d265a2142285cebcddd5a5f46075ebfe497bbbc8ceb870e9b848ae7a28" exitCode=0 Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.098039 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qqjbv" event={"ID":"e827d55c-315b-4615-bddb-71bef534c284","Type":"ContainerDied","Data":"79f693d265a2142285cebcddd5a5f46075ebfe497bbbc8ceb870e9b848ae7a28"} Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.107280 4799 generic.go:334] "Generic (PLEG): container finished" podID="36fe0ab4-e31e-46ec-9e5e-d806b8423138" containerID="36423b169d8031e33fcf223682e5abfd19f1e2465c3295f2fe025d97c32be5b5" exitCode=0 Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.107983 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-523c-account-create-update-ldgmh" event={"ID":"36fe0ab4-e31e-46ec-9e5e-d806b8423138","Type":"ContainerDied","Data":"36423b169d8031e33fcf223682e5abfd19f1e2465c3295f2fe025d97c32be5b5"} Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.108200 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/watcher-decision-engine-0" Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.143589 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:16 crc kubenswrapper[4799]: I0216 12:53:16.228036 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.566228 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.631689 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fe0ab4-e31e-46ec-9e5e-d806b8423138-operator-scripts\") pod \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\" (UID: \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\") " Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.631941 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9msr4\" (UniqueName: \"kubernetes.io/projected/36fe0ab4-e31e-46ec-9e5e-d806b8423138-kube-api-access-9msr4\") pod \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\" (UID: \"36fe0ab4-e31e-46ec-9e5e-d806b8423138\") " Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.633178 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36fe0ab4-e31e-46ec-9e5e-d806b8423138-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36fe0ab4-e31e-46ec-9e5e-d806b8423138" (UID: "36fe0ab4-e31e-46ec-9e5e-d806b8423138"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.639143 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fe0ab4-e31e-46ec-9e5e-d806b8423138-kube-api-access-9msr4" (OuterVolumeSpecName: "kube-api-access-9msr4") pod "36fe0ab4-e31e-46ec-9e5e-d806b8423138" (UID: "36fe0ab4-e31e-46ec-9e5e-d806b8423138"). InnerVolumeSpecName "kube-api-access-9msr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.734144 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9msr4\" (UniqueName: \"kubernetes.io/projected/36fe0ab4-e31e-46ec-9e5e-d806b8423138-kube-api-access-9msr4\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.734188 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fe0ab4-e31e-46ec-9e5e-d806b8423138-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.908411 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.937684 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.943640 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-operator-scripts\") pod \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\" (UID: \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\") " Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.943851 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t68vw\" (UniqueName: \"kubernetes.io/projected/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-kube-api-access-t68vw\") pod \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\" (UID: \"b2be1ba0-aac6-4d75-a35f-31ba41b971d5\") " Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.944830 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2be1ba0-aac6-4d75-a35f-31ba41b971d5" (UID: "b2be1ba0-aac6-4d75-a35f-31ba41b971d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.949470 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.955045 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-kube-api-access-t68vw" (OuterVolumeSpecName: "kube-api-access-t68vw") pod "b2be1ba0-aac6-4d75-a35f-31ba41b971d5" (UID: "b2be1ba0-aac6-4d75-a35f-31ba41b971d5"). InnerVolumeSpecName "kube-api-access-t68vw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:17 crc kubenswrapper[4799]: I0216 12:53:17.985999 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.050228 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8qgd\" (UniqueName: \"kubernetes.io/projected/3f27d260-32a5-4071-b01e-5674ddf856ec-kube-api-access-x8qgd\") pod \"3f27d260-32a5-4071-b01e-5674ddf856ec\" (UID: \"3f27d260-32a5-4071-b01e-5674ddf856ec\") " Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.050366 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db161b46-fe7a-4bd4-826b-052cbcef338f-operator-scripts\") pod \"db161b46-fe7a-4bd4-826b-052cbcef338f\" (UID: \"db161b46-fe7a-4bd4-826b-052cbcef338f\") " Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.050390 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e827d55c-315b-4615-bddb-71bef534c284-operator-scripts\") pod \"e827d55c-315b-4615-bddb-71bef534c284\" (UID: \"e827d55c-315b-4615-bddb-71bef534c284\") " Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.050440 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rx2r\" (UniqueName: \"kubernetes.io/projected/e827d55c-315b-4615-bddb-71bef534c284-kube-api-access-2rx2r\") pod \"e827d55c-315b-4615-bddb-71bef534c284\" (UID: \"e827d55c-315b-4615-bddb-71bef534c284\") " Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.050464 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f27d260-32a5-4071-b01e-5674ddf856ec-operator-scripts\") pod 
\"3f27d260-32a5-4071-b01e-5674ddf856ec\" (UID: \"3f27d260-32a5-4071-b01e-5674ddf856ec\") " Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.050640 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bgpp\" (UniqueName: \"kubernetes.io/projected/db161b46-fe7a-4bd4-826b-052cbcef338f-kube-api-access-7bgpp\") pod \"db161b46-fe7a-4bd4-826b-052cbcef338f\" (UID: \"db161b46-fe7a-4bd4-826b-052cbcef338f\") " Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.051098 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t68vw\" (UniqueName: \"kubernetes.io/projected/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-kube-api-access-t68vw\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.051115 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2be1ba0-aac6-4d75-a35f-31ba41b971d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.051383 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e827d55c-315b-4615-bddb-71bef534c284-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e827d55c-315b-4615-bddb-71bef534c284" (UID: "e827d55c-315b-4615-bddb-71bef534c284"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.057658 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e827d55c-315b-4615-bddb-71bef534c284-kube-api-access-2rx2r" (OuterVolumeSpecName: "kube-api-access-2rx2r") pod "e827d55c-315b-4615-bddb-71bef534c284" (UID: "e827d55c-315b-4615-bddb-71bef534c284"). InnerVolumeSpecName "kube-api-access-2rx2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.058046 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f27d260-32a5-4071-b01e-5674ddf856ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f27d260-32a5-4071-b01e-5674ddf856ec" (UID: "3f27d260-32a5-4071-b01e-5674ddf856ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.058341 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db161b46-fe7a-4bd4-826b-052cbcef338f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db161b46-fe7a-4bd4-826b-052cbcef338f" (UID: "db161b46-fe7a-4bd4-826b-052cbcef338f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.075307 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db161b46-fe7a-4bd4-826b-052cbcef338f-kube-api-access-7bgpp" (OuterVolumeSpecName: "kube-api-access-7bgpp") pod "db161b46-fe7a-4bd4-826b-052cbcef338f" (UID: "db161b46-fe7a-4bd4-826b-052cbcef338f"). InnerVolumeSpecName "kube-api-access-7bgpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.075736 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f27d260-32a5-4071-b01e-5674ddf856ec-kube-api-access-x8qgd" (OuterVolumeSpecName: "kube-api-access-x8qgd") pod "3f27d260-32a5-4071-b01e-5674ddf856ec" (UID: "3f27d260-32a5-4071-b01e-5674ddf856ec"). InnerVolumeSpecName "kube-api-access-x8qgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.139174 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" event={"ID":"b2be1ba0-aac6-4d75-a35f-31ba41b971d5","Type":"ContainerDied","Data":"aab4cc5e1d544337a7ff6a7b4deea63577d3c0fc0b4a4405b2c915dac6a4d0e1"} Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.139230 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab4cc5e1d544337a7ff6a7b4deea63577d3c0fc0b4a4405b2c915dac6a4d0e1" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.139306 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b02-account-create-update-pbs7h" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.147901 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cmgtj" event={"ID":"db161b46-fe7a-4bd4-826b-052cbcef338f","Type":"ContainerDied","Data":"823f7e06378ac3babba99b9ba4fe6205e2da18e1a950304bb231cd28e5f77a2c"} Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.147946 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823f7e06378ac3babba99b9ba4fe6205e2da18e1a950304bb231cd28e5f77a2c" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.148009 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cmgtj" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.149643 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qqjbv" event={"ID":"e827d55c-315b-4615-bddb-71bef534c284","Type":"ContainerDied","Data":"4c4d797ca5d8962cb7ac6652f1b54dbfb1debfe3ac56b3dc7ba367ef4b3ecb5e"} Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.149691 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c4d797ca5d8962cb7ac6652f1b54dbfb1debfe3ac56b3dc7ba367ef4b3ecb5e" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.149766 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qqjbv" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.156742 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bgpp\" (UniqueName: \"kubernetes.io/projected/db161b46-fe7a-4bd4-826b-052cbcef338f-kube-api-access-7bgpp\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.156961 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8qgd\" (UniqueName: \"kubernetes.io/projected/3f27d260-32a5-4071-b01e-5674ddf856ec-kube-api-access-x8qgd\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.157040 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db161b46-fe7a-4bd4-826b-052cbcef338f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.157159 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e827d55c-315b-4615-bddb-71bef534c284-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.157221 4799 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-2rx2r\" (UniqueName: \"kubernetes.io/projected/e827d55c-315b-4615-bddb-71bef534c284-kube-api-access-2rx2r\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.157290 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f27d260-32a5-4071-b01e-5674ddf856ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.157302 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-523c-account-create-update-ldgmh" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.157234 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-523c-account-create-update-ldgmh" event={"ID":"36fe0ab4-e31e-46ec-9e5e-d806b8423138","Type":"ContainerDied","Data":"fd8c010a5d393bf2f961bba2858780a5e7a51b7cabaf5a9dec0032dfbc3bcc47"} Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.157784 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8c010a5d393bf2f961bba2858780a5e7a51b7cabaf5a9dec0032dfbc3bcc47" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.158903 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" containerID="cri-o://121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6" gracePeriod=30 Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.159209 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2j7p7" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.161377 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2j7p7" event={"ID":"3f27d260-32a5-4071-b01e-5674ddf856ec","Type":"ContainerDied","Data":"d10328f91457339363bb46b60e83eb67473521ccf8decad80cd1a9845a5efc14"} Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.161483 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d10328f91457339363bb46b60e83eb67473521ccf8decad80cd1a9845a5efc14" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.208301 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.258979 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-operator-scripts\") pod \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\" (UID: \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\") " Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.259135 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggvxj\" (UniqueName: \"kubernetes.io/projected/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-kube-api-access-ggvxj\") pod \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\" (UID: \"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d\") " Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.259926 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d" (UID: "e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.262443 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-kube-api-access-ggvxj" (OuterVolumeSpecName: "kube-api-access-ggvxj") pod "e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d" (UID: "e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d"). InnerVolumeSpecName "kube-api-access-ggvxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.361378 4799 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:18 crc kubenswrapper[4799]: I0216 12:53:18.361409 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggvxj\" (UniqueName: \"kubernetes.io/projected/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d-kube-api-access-ggvxj\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:19 crc kubenswrapper[4799]: I0216 12:53:19.168390 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" event={"ID":"e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d","Type":"ContainerDied","Data":"8b1b641199748b847c855f51f53ad092f6257ddf2ec5384b51126de5cfbbdfe7"} Feb 16 12:53:19 crc kubenswrapper[4799]: I0216 12:53:19.168780 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b1b641199748b847c855f51f53ad092f6257ddf2ec5384b51126de5cfbbdfe7" Feb 16 12:53:19 crc kubenswrapper[4799]: I0216 12:53:19.168856 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-378d-account-create-update-sjsz5" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.610490 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.717286 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-custom-prometheus-ca\") pod \"89824920-bcd3-4640-b27b-68554fad00bb\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.717369 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp8r9\" (UniqueName: \"kubernetes.io/projected/89824920-bcd3-4640-b27b-68554fad00bb-kube-api-access-cp8r9\") pod \"89824920-bcd3-4640-b27b-68554fad00bb\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.717397 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-combined-ca-bundle\") pod \"89824920-bcd3-4640-b27b-68554fad00bb\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.717529 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-config-data\") pod \"89824920-bcd3-4640-b27b-68554fad00bb\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.717552 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89824920-bcd3-4640-b27b-68554fad00bb-logs\") pod \"89824920-bcd3-4640-b27b-68554fad00bb\" (UID: \"89824920-bcd3-4640-b27b-68554fad00bb\") " Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.718211 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/89824920-bcd3-4640-b27b-68554fad00bb-logs" (OuterVolumeSpecName: "logs") pod "89824920-bcd3-4640-b27b-68554fad00bb" (UID: "89824920-bcd3-4640-b27b-68554fad00bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.723678 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89824920-bcd3-4640-b27b-68554fad00bb-kube-api-access-cp8r9" (OuterVolumeSpecName: "kube-api-access-cp8r9") pod "89824920-bcd3-4640-b27b-68554fad00bb" (UID: "89824920-bcd3-4640-b27b-68554fad00bb"). InnerVolumeSpecName "kube-api-access-cp8r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.750198 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89824920-bcd3-4640-b27b-68554fad00bb" (UID: "89824920-bcd3-4640-b27b-68554fad00bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.764007 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "89824920-bcd3-4640-b27b-68554fad00bb" (UID: "89824920-bcd3-4640-b27b-68554fad00bb"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.791729 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-config-data" (OuterVolumeSpecName: "config-data") pod "89824920-bcd3-4640-b27b-68554fad00bb" (UID: "89824920-bcd3-4640-b27b-68554fad00bb"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.820375 4799 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.820414 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp8r9\" (UniqueName: \"kubernetes.io/projected/89824920-bcd3-4640-b27b-68554fad00bb-kube-api-access-cp8r9\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.820426 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.820439 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89824920-bcd3-4640-b27b-68554fad00bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:20 crc kubenswrapper[4799]: I0216 12:53:20.820447 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89824920-bcd3-4640-b27b-68554fad00bb-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.190775 4799 generic.go:334] "Generic (PLEG): container finished" podID="89824920-bcd3-4640-b27b-68554fad00bb" containerID="121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6" exitCode=0 Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.190828 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerDied","Data":"121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6"} Feb 16 
12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.190862 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"89824920-bcd3-4640-b27b-68554fad00bb","Type":"ContainerDied","Data":"23ab338826720b60112701afacadbe59b97e0574d958aeed561e5c4e6cd4c569"} Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.190886 4799 scope.go:117] "RemoveContainer" containerID="121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.190835 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.230068 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.246025 4799 scope.go:117] "RemoveContainer" containerID="7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.250184 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266163 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266723 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266746 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266758 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36fe0ab4-e31e-46ec-9e5e-d806b8423138" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: 
I0216 12:53:21.266766 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fe0ab4-e31e-46ec-9e5e-d806b8423138" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266776 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e827d55c-315b-4615-bddb-71bef534c284" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266784 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e827d55c-315b-4615-bddb-71bef534c284" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266795 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266802 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266822 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266830 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266841 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266848 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266868 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266876 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266894 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f27d260-32a5-4071-b01e-5674ddf856ec" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266901 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f27d260-32a5-4071-b01e-5674ddf856ec" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266916 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2be1ba0-aac6-4d75-a35f-31ba41b971d5" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266925 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2be1ba0-aac6-4d75-a35f-31ba41b971d5" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.266943 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db161b46-fe7a-4bd4-826b-052cbcef338f" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.266950 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="db161b46-fe7a-4bd4-826b-052cbcef338f" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.267198 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.267211 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e827d55c-315b-4615-bddb-71bef534c284" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: 
I0216 12:53:21.267224 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f27d260-32a5-4071-b01e-5674ddf856ec" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.267238 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.267250 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2be1ba0-aac6-4d75-a35f-31ba41b971d5" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.267262 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.267273 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="36fe0ab4-e31e-46ec-9e5e-d806b8423138" containerName="mariadb-account-create-update" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.267287 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="db161b46-fe7a-4bd4-826b-052cbcef338f" containerName="mariadb-database-create" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.268210 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.272143 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.278682 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.332620 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.332866 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15e35f6-4998-4a70-9f95-272ba07a39ef-logs\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.332997 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qf7\" (UniqueName: \"kubernetes.io/projected/a15e35f6-4998-4a70-9f95-272ba07a39ef-kube-api-access-l5qf7\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.333030 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " 
pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.333178 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.355525 4799 scope.go:117] "RemoveContainer" containerID="121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.356690 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6\": container with ID starting with 121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6 not found: ID does not exist" containerID="121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.356834 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6"} err="failed to get container status \"121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6\": rpc error: code = NotFound desc = could not find container \"121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6\": container with ID starting with 121a372e110514707826ca4518bab29e8067a8b0cbed33139884fcef61fb16b6 not found: ID does not exist" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.356969 4799 scope.go:117] "RemoveContainer" containerID="7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02" Feb 16 12:53:21 crc kubenswrapper[4799]: E0216 12:53:21.357365 4799 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02\": container with ID starting with 7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02 not found: ID does not exist" containerID="7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.357415 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02"} err="failed to get container status \"7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02\": rpc error: code = NotFound desc = could not find container \"7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02\": container with ID starting with 7b120c2cea9b9f36fb4f622f8ddd744146323f1bdde89b8ff6c4e79dfa0f9e02 not found: ID does not exist" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.435782 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15e35f6-4998-4a70-9f95-272ba07a39ef-logs\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.435883 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5qf7\" (UniqueName: \"kubernetes.io/projected/a15e35f6-4998-4a70-9f95-272ba07a39ef-kube-api-access-l5qf7\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.435912 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: 
\"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.435995 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.436093 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.436354 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15e35f6-4998-4a70-9f95-272ba07a39ef-logs\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.440241 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.440735 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc 
kubenswrapper[4799]: I0216 12:53:21.449969 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15e35f6-4998-4a70-9f95-272ba07a39ef-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.468781 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5qf7\" (UniqueName: \"kubernetes.io/projected/a15e35f6-4998-4a70-9f95-272ba07a39ef-kube-api-access-l5qf7\") pod \"watcher-decision-engine-0\" (UID: \"a15e35f6-4998-4a70-9f95-272ba07a39ef\") " pod="openstack/watcher-decision-engine-0" Feb 16 12:53:21 crc kubenswrapper[4799]: I0216 12:53:21.652258 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:22 crc kubenswrapper[4799]: W0216 12:53:22.200953 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15e35f6_4998_4a70_9f95_272ba07a39ef.slice/crio-9563ab4cff4b0b4d1e2633c963796dda5774521f8911ab6fe72ebf62e65e8c8d WatchSource:0}: Error finding container 9563ab4cff4b0b4d1e2633c963796dda5774521f8911ab6fe72ebf62e65e8c8d: Status 404 returned error can't find the container with id 9563ab4cff4b0b4d1e2633c963796dda5774521f8911ab6fe72ebf62e65e8c8d Feb 16 12:53:22 crc kubenswrapper[4799]: I0216 12:53:22.205920 4799 generic.go:334] "Generic (PLEG): container finished" podID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerID="29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639" exitCode=0 Feb 16 12:53:22 crc kubenswrapper[4799]: I0216 12:53:22.206011 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerDied","Data":"29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639"} Feb 16 12:53:22 crc kubenswrapper[4799]: I0216 12:53:22.209406 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.189752 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89824920-bcd3-4640-b27b-68554fad00bb" path="/var/lib/kubelet/pods/89824920-bcd3-4640-b27b-68554fad00bb/volumes" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.225855 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"a15e35f6-4998-4a70-9f95-272ba07a39ef","Type":"ContainerStarted","Data":"d84eca7023e7a1fedc539ef69bf148a2f351a556fe8d7f7c7abd1067b36c8b84"} Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.225911 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"a15e35f6-4998-4a70-9f95-272ba07a39ef","Type":"ContainerStarted","Data":"9563ab4cff4b0b4d1e2633c963796dda5774521f8911ab6fe72ebf62e65e8c8d"} Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.254997 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.2549740160000002 podStartE2EDuration="2.254974016s" podCreationTimestamp="2026-02-16 12:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:23.243352973 +0000 UTC m=+1308.836368317" watchObservedRunningTime="2026-02-16 12:53:23.254974016 +0000 UTC m=+1308.847989350" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.577006 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b5wng"] Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.577826 4799 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.578656 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.598213 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kclq8" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.598503 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.598641 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.612776 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b5wng"] Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.687261 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-scripts\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.687665 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkzbg\" (UniqueName: \"kubernetes.io/projected/4d3e4608-cd26-490c-b994-45e90311e4bc-kube-api-access-jkzbg\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.687768 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-config-data\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.687870 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.789830 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkzbg\" (UniqueName: \"kubernetes.io/projected/4d3e4608-cd26-490c-b994-45e90311e4bc-kube-api-access-jkzbg\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.789941 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-config-data\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.789985 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: 
I0216 12:53:23.790182 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-scripts\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.797121 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.797211 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-scripts\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.811557 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-config-data\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.829716 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkzbg\" (UniqueName: \"kubernetes.io/projected/4d3e4608-cd26-490c-b994-45e90311e4bc-kube-api-access-jkzbg\") pod \"nova-cell0-conductor-db-sync-b5wng\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:23 crc kubenswrapper[4799]: I0216 12:53:23.901044 4799 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:24 crc kubenswrapper[4799]: I0216 12:53:24.425886 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b5wng"] Feb 16 12:53:24 crc kubenswrapper[4799]: W0216 12:53:24.429578 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d3e4608_cd26_490c_b994_45e90311e4bc.slice/crio-8ddf912aa8742d55751636e8ed160dad17328f62d1ea875aa02f1a64598fcaf3 WatchSource:0}: Error finding container 8ddf912aa8742d55751636e8ed160dad17328f62d1ea875aa02f1a64598fcaf3: Status 404 returned error can't find the container with id 8ddf912aa8742d55751636e8ed160dad17328f62d1ea875aa02f1a64598fcaf3 Feb 16 12:53:24 crc kubenswrapper[4799]: I0216 12:53:24.432196 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 12:53:25 crc kubenswrapper[4799]: I0216 12:53:25.251857 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b5wng" event={"ID":"4d3e4608-cd26-490c-b994-45e90311e4bc","Type":"ContainerStarted","Data":"8ddf912aa8742d55751636e8ed160dad17328f62d1ea875aa02f1a64598fcaf3"} Feb 16 12:53:31 crc kubenswrapper[4799]: I0216 12:53:31.653524 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:31 crc kubenswrapper[4799]: I0216 12:53:31.685453 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:32 crc kubenswrapper[4799]: I0216 12:53:32.335571 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 16 12:53:32 crc kubenswrapper[4799]: I0216 12:53:32.371585 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/watcher-decision-engine-0" Feb 16 12:53:33 crc kubenswrapper[4799]: I0216 12:53:33.345323 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b5wng" event={"ID":"4d3e4608-cd26-490c-b994-45e90311e4bc","Type":"ContainerStarted","Data":"91e98d674aebef321eba251d510b930aafdb3f18c0abb8d9dd6e2ca492bb134d"} Feb 16 12:53:33 crc kubenswrapper[4799]: I0216 12:53:33.365494 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-b5wng" podStartSLOduration=2.454144038 podStartE2EDuration="10.365470541s" podCreationTimestamp="2026-02-16 12:53:23 +0000 UTC" firstStartedPulling="2026-02-16 12:53:24.431881013 +0000 UTC m=+1310.024896357" lastFinishedPulling="2026-02-16 12:53:32.343207526 +0000 UTC m=+1317.936222860" observedRunningTime="2026-02-16 12:53:33.360842898 +0000 UTC m=+1318.953858232" watchObservedRunningTime="2026-02-16 12:53:33.365470541 +0000 UTC m=+1318.958485875" Feb 16 12:53:34 crc kubenswrapper[4799]: I0216 12:53:34.514266 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.214051 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6830c16d_ca25_432a_a879_c7e5cb64c593.slice/crio-conmon-0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c.scope\": RecentStats: unable to find data in memory cache]" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.416877 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.458996 4799 generic.go:334] "Generic (PLEG): container finished" podID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerID="0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c" exitCode=137 Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.459046 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerDied","Data":"0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c"} Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.459075 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6830c16d-ca25-432a-a879-c7e5cb64c593","Type":"ContainerDied","Data":"e55a72f9b1c0822f4fe7b95cc772e5680b00855633454f63f14200ae2c5d38da"} Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.459091 4799 scope.go:117] "RemoveContainer" containerID="0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.459258 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.477821 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-combined-ca-bundle\") pod \"6830c16d-ca25-432a-a879-c7e5cb64c593\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.477925 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm7xm\" (UniqueName: \"kubernetes.io/projected/6830c16d-ca25-432a-a879-c7e5cb64c593-kube-api-access-bm7xm\") pod \"6830c16d-ca25-432a-a879-c7e5cb64c593\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.477981 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-config-data\") pod \"6830c16d-ca25-432a-a879-c7e5cb64c593\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.478037 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-scripts\") pod \"6830c16d-ca25-432a-a879-c7e5cb64c593\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.478076 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-sg-core-conf-yaml\") pod \"6830c16d-ca25-432a-a879-c7e5cb64c593\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.478103 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-run-httpd\") pod \"6830c16d-ca25-432a-a879-c7e5cb64c593\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.478319 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-log-httpd\") pod \"6830c16d-ca25-432a-a879-c7e5cb64c593\" (UID: \"6830c16d-ca25-432a-a879-c7e5cb64c593\") " Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.479261 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6830c16d-ca25-432a-a879-c7e5cb64c593" (UID: "6830c16d-ca25-432a-a879-c7e5cb64c593"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.479781 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6830c16d-ca25-432a-a879-c7e5cb64c593" (UID: "6830c16d-ca25-432a-a879-c7e5cb64c593"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.485400 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-scripts" (OuterVolumeSpecName: "scripts") pod "6830c16d-ca25-432a-a879-c7e5cb64c593" (UID: "6830c16d-ca25-432a-a879-c7e5cb64c593"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.488334 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6830c16d-ca25-432a-a879-c7e5cb64c593-kube-api-access-bm7xm" (OuterVolumeSpecName: "kube-api-access-bm7xm") pod "6830c16d-ca25-432a-a879-c7e5cb64c593" (UID: "6830c16d-ca25-432a-a879-c7e5cb64c593"). InnerVolumeSpecName "kube-api-access-bm7xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.513791 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6830c16d-ca25-432a-a879-c7e5cb64c593" (UID: "6830c16d-ca25-432a-a879-c7e5cb64c593"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.543347 4799 scope.go:117] "RemoveContainer" containerID="e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.574927 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6830c16d-ca25-432a-a879-c7e5cb64c593" (UID: "6830c16d-ca25-432a-a879-c7e5cb64c593"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.580496 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.580528 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.580538 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.580546 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6830c16d-ca25-432a-a879-c7e5cb64c593-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.580554 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.580565 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm7xm\" (UniqueName: \"kubernetes.io/projected/6830c16d-ca25-432a-a879-c7e5cb64c593-kube-api-access-bm7xm\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.618670 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-config-data" (OuterVolumeSpecName: "config-data") pod "6830c16d-ca25-432a-a879-c7e5cb64c593" (UID: "6830c16d-ca25-432a-a879-c7e5cb64c593"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.682801 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6830c16d-ca25-432a-a879-c7e5cb64c593-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.687589 4799 scope.go:117] "RemoveContainer" containerID="ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.739354 4799 scope.go:117] "RemoveContainer" containerID="29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.809734 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.836083 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.841486 4799 scope.go:117] "RemoveContainer" containerID="0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c" Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.842011 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c\": container with ID starting with 0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c not found: ID does not exist" containerID="0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.842051 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c"} err="failed to get container status \"0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c\": rpc error: code = 
NotFound desc = could not find container \"0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c\": container with ID starting with 0cceb58ffafd9750e8e7b9cf24fca08d68ce24732b8a065de8011bcb93730a5c not found: ID does not exist" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.842076 4799 scope.go:117] "RemoveContainer" containerID="e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f" Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.845217 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f\": container with ID starting with e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f not found: ID does not exist" containerID="e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.845246 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f"} err="failed to get container status \"e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f\": rpc error: code = NotFound desc = could not find container \"e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f\": container with ID starting with e67f0fe69aec2d050b6d01e1749889fc454777891917c3b2407f82eb5595e18f not found: ID does not exist" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.845266 4799 scope.go:117] "RemoveContainer" containerID="ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06" Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.848588 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06\": container with ID starting with 
ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06 not found: ID does not exist" containerID="ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.848626 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06"} err="failed to get container status \"ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06\": rpc error: code = NotFound desc = could not find container \"ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06\": container with ID starting with ed5c6c432e01fdd0c3cb76abcb151a11e51b3148854b4e0deacf8efdb5a82f06 not found: ID does not exist" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.848650 4799 scope.go:117] "RemoveContainer" containerID="29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.855852 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.856385 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="ceilometer-central-agent" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856403 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="ceilometer-central-agent" Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.856413 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="sg-core" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856421 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="sg-core" Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.856431 4799 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="proxy-httpd" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856439 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="proxy-httpd" Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.856472 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="ceilometer-notification-agent" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856481 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="ceilometer-notification-agent" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856717 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="sg-core" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856737 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="89824920-bcd3-4640-b27b-68554fad00bb" containerName="watcher-decision-engine" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856762 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="ceilometer-central-agent" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856781 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="proxy-httpd" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.856792 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" containerName="ceilometer-notification-agent" Feb 16 12:53:40 crc kubenswrapper[4799]: E0216 12:53:40.857443 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639\": 
container with ID starting with 29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639 not found: ID does not exist" containerID="29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.857683 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639"} err="failed to get container status \"29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639\": rpc error: code = NotFound desc = could not find container \"29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639\": container with ID starting with 29a35107c2da43fe950fe8bf7b46a92a595aa0f9b724e83b290fd1e9c79ab639 not found: ID does not exist" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.858925 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.862021 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.862239 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.871723 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.990065 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-log-httpd\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.990605 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-scripts\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.990830 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.991043 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-config-data\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.991307 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb2tw\" (UniqueName: \"kubernetes.io/projected/6401f6c7-d00e-4a76-b542-4e817c8e049a-kube-api-access-jb2tw\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.991374 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:40 crc kubenswrapper[4799]: I0216 12:53:40.991527 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-run-httpd\") pod \"ceilometer-0\" (UID: 
\"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.093568 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-scripts\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.093632 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.093669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-config-data\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.093701 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb2tw\" (UniqueName: \"kubernetes.io/projected/6401f6c7-d00e-4a76-b542-4e817c8e049a-kube-api-access-jb2tw\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.093721 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.093762 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-run-httpd\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.093830 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-log-httpd\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.094546 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-log-httpd\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.094870 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-run-httpd\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.099873 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-scripts\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.102466 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.103012 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.112855 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-config-data\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.124081 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb2tw\" (UniqueName: \"kubernetes.io/projected/6401f6c7-d00e-4a76-b542-4e817c8e049a-kube-api-access-jb2tw\") pod \"ceilometer-0\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.161624 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6830c16d-ca25-432a-a879-c7e5cb64c593" path="/var/lib/kubelet/pods/6830c16d-ca25-432a-a879-c7e5cb64c593/volumes" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.184528 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:53:41 crc kubenswrapper[4799]: I0216 12:53:41.692655 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:53:41 crc kubenswrapper[4799]: W0216 12:53:41.701023 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6401f6c7_d00e_4a76_b542_4e817c8e049a.slice/crio-44cc40cb819a28e5247ba4d1500b96f9d1ea5c3533d0d6357405c9fdbc0a853d WatchSource:0}: Error finding container 44cc40cb819a28e5247ba4d1500b96f9d1ea5c3533d0d6357405c9fdbc0a853d: Status 404 returned error can't find the container with id 44cc40cb819a28e5247ba4d1500b96f9d1ea5c3533d0d6357405c9fdbc0a853d Feb 16 12:53:42 crc kubenswrapper[4799]: I0216 12:53:42.484489 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerStarted","Data":"6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1"} Feb 16 12:53:42 crc kubenswrapper[4799]: I0216 12:53:42.485057 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerStarted","Data":"1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec"} Feb 16 12:53:42 crc kubenswrapper[4799]: I0216 12:53:42.485074 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerStarted","Data":"44cc40cb819a28e5247ba4d1500b96f9d1ea5c3533d0d6357405c9fdbc0a853d"} Feb 16 12:53:43 crc kubenswrapper[4799]: I0216 12:53:43.494427 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerStarted","Data":"a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e"} Feb 16 12:53:45 crc kubenswrapper[4799]: I0216 
12:53:45.517222 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerStarted","Data":"b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022"} Feb 16 12:53:45 crc kubenswrapper[4799]: I0216 12:53:45.517455 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 12:53:45 crc kubenswrapper[4799]: I0216 12:53:45.549324 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.744084649 podStartE2EDuration="5.549303238s" podCreationTimestamp="2026-02-16 12:53:40 +0000 UTC" firstStartedPulling="2026-02-16 12:53:41.703632675 +0000 UTC m=+1327.296648009" lastFinishedPulling="2026-02-16 12:53:44.508851264 +0000 UTC m=+1330.101866598" observedRunningTime="2026-02-16 12:53:45.544077609 +0000 UTC m=+1331.137092943" watchObservedRunningTime="2026-02-16 12:53:45.549303238 +0000 UTC m=+1331.142318572" Feb 16 12:53:49 crc kubenswrapper[4799]: I0216 12:53:49.555236 4799 generic.go:334] "Generic (PLEG): container finished" podID="4d3e4608-cd26-490c-b994-45e90311e4bc" containerID="91e98d674aebef321eba251d510b930aafdb3f18c0abb8d9dd6e2ca492bb134d" exitCode=0 Feb 16 12:53:49 crc kubenswrapper[4799]: I0216 12:53:49.555318 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b5wng" event={"ID":"4d3e4608-cd26-490c-b994-45e90311e4bc","Type":"ContainerDied","Data":"91e98d674aebef321eba251d510b930aafdb3f18c0abb8d9dd6e2ca492bb134d"} Feb 16 12:53:50 crc kubenswrapper[4799]: I0216 12:53:50.975089 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.004950 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-config-data\") pod \"4d3e4608-cd26-490c-b994-45e90311e4bc\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.005020 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-scripts\") pod \"4d3e4608-cd26-490c-b994-45e90311e4bc\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.005089 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkzbg\" (UniqueName: \"kubernetes.io/projected/4d3e4608-cd26-490c-b994-45e90311e4bc-kube-api-access-jkzbg\") pod \"4d3e4608-cd26-490c-b994-45e90311e4bc\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.005157 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-combined-ca-bundle\") pod \"4d3e4608-cd26-490c-b994-45e90311e4bc\" (UID: \"4d3e4608-cd26-490c-b994-45e90311e4bc\") " Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.011819 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-scripts" (OuterVolumeSpecName: "scripts") pod "4d3e4608-cd26-490c-b994-45e90311e4bc" (UID: "4d3e4608-cd26-490c-b994-45e90311e4bc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.016723 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3e4608-cd26-490c-b994-45e90311e4bc-kube-api-access-jkzbg" (OuterVolumeSpecName: "kube-api-access-jkzbg") pod "4d3e4608-cd26-490c-b994-45e90311e4bc" (UID: "4d3e4608-cd26-490c-b994-45e90311e4bc"). InnerVolumeSpecName "kube-api-access-jkzbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.039355 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-config-data" (OuterVolumeSpecName: "config-data") pod "4d3e4608-cd26-490c-b994-45e90311e4bc" (UID: "4d3e4608-cd26-490c-b994-45e90311e4bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.046299 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d3e4608-cd26-490c-b994-45e90311e4bc" (UID: "4d3e4608-cd26-490c-b994-45e90311e4bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.107962 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.108053 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.108062 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3e4608-cd26-490c-b994-45e90311e4bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.108080 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkzbg\" (UniqueName: \"kubernetes.io/projected/4d3e4608-cd26-490c-b994-45e90311e4bc-kube-api-access-jkzbg\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.578544 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-b5wng" event={"ID":"4d3e4608-cd26-490c-b994-45e90311e4bc","Type":"ContainerDied","Data":"8ddf912aa8742d55751636e8ed160dad17328f62d1ea875aa02f1a64598fcaf3"} Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.579199 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ddf912aa8742d55751636e8ed160dad17328f62d1ea875aa02f1a64598fcaf3" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.578611 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-b5wng" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.698380 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 12:53:51 crc kubenswrapper[4799]: E0216 12:53:51.698922 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3e4608-cd26-490c-b994-45e90311e4bc" containerName="nova-cell0-conductor-db-sync" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.698958 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3e4608-cd26-490c-b994-45e90311e4bc" containerName="nova-cell0-conductor-db-sync" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.699197 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3e4608-cd26-490c-b994-45e90311e4bc" containerName="nova-cell0-conductor-db-sync" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.700093 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.705760 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.706168 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kclq8" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.719914 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.719988 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5rrb\" (UniqueName: 
\"kubernetes.io/projected/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-kube-api-access-b5rrb\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.720109 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.740176 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.822439 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.822516 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5rrb\" (UniqueName: \"kubernetes.io/projected/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-kube-api-access-b5rrb\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.822626 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.828638 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.829352 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:51 crc kubenswrapper[4799]: I0216 12:53:51.843114 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5rrb\" (UniqueName: \"kubernetes.io/projected/ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6-kube-api-access-b5rrb\") pod \"nova-cell0-conductor-0\" (UID: \"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6\") " pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:52 crc kubenswrapper[4799]: I0216 12:53:52.037709 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:52 crc kubenswrapper[4799]: I0216 12:53:52.555646 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 12:53:53 crc kubenswrapper[4799]: I0216 12:53:53.600757 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6","Type":"ContainerStarted","Data":"755e1cf0b4d738ab6db39965b59f7581412fcb02cba7ef70787310c4b3efe64e"} Feb 16 12:53:53 crc kubenswrapper[4799]: I0216 12:53:53.601460 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6","Type":"ContainerStarted","Data":"019c944cdc5413ac6ffa6b7edaa4c14f1a578e780aeb3e4385747b1a144099b6"} Feb 16 12:53:53 crc kubenswrapper[4799]: I0216 12:53:53.601480 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:53 crc kubenswrapper[4799]: I0216 12:53:53.621888 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.621867064 podStartE2EDuration="2.621867064s" podCreationTimestamp="2026-02-16 12:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:53.615592064 +0000 UTC m=+1339.208607388" watchObservedRunningTime="2026-02-16 12:53:53.621867064 +0000 UTC m=+1339.214882398" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.070653 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.568070 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-swpq5"] Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.570507 4799 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.575522 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.577072 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.582357 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-swpq5"] Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.745329 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-config-data\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.745380 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk7bn\" (UniqueName: \"kubernetes.io/projected/26e37ea2-a3b0-43c1-94d4-c545edaed454-kube-api-access-vk7bn\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.745436 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.745522 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-scripts\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.847256 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-scripts\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.847382 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-config-data\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.847414 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7bn\" (UniqueName: \"kubernetes.io/projected/26e37ea2-a3b0-43c1-94d4-c545edaed454-kube-api-access-vk7bn\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.847480 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.856970 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-config-data\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.866003 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.876677 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-scripts\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.927425 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7bn\" (UniqueName: \"kubernetes.io/projected/26e37ea2-a3b0-43c1-94d4-c545edaed454-kube-api-access-vk7bn\") pod \"nova-cell0-cell-mapping-swpq5\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.946420 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.974963 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:53:57 crc kubenswrapper[4799]: I0216 12:53:57.978447 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.006754 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.035543 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.046663 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.065650 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.066215 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.098763 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.151971 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-config-data\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.152014 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqbq2\" (UniqueName: \"kubernetes.io/projected/e2832cde-7564-4af6-8e09-b66142ab6c27-kube-api-access-pqbq2\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.152071 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e2832cde-7564-4af6-8e09-b66142ab6c27-logs\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.152116 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.152252 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ecff6a-6cbd-4171-8bde-f10826eddb30-logs\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.152283 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-config-data\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.152320 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9bmf\" (UniqueName: \"kubernetes.io/projected/56ecff6a-6cbd-4171-8bde-f10826eddb30-kube-api-access-f9bmf\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.152341 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.253666 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2832cde-7564-4af6-8e09-b66142ab6c27-logs\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.253790 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.253911 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ecff6a-6cbd-4171-8bde-f10826eddb30-logs\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.253941 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-config-data\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.253987 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9bmf\" (UniqueName: \"kubernetes.io/projected/56ecff6a-6cbd-4171-8bde-f10826eddb30-kube-api-access-f9bmf\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.254016 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.254108 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-config-data\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.254155 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqbq2\" (UniqueName: \"kubernetes.io/projected/e2832cde-7564-4af6-8e09-b66142ab6c27-kube-api-access-pqbq2\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.255023 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2832cde-7564-4af6-8e09-b66142ab6c27-logs\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.255167 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ecff6a-6cbd-4171-8bde-f10826eddb30-logs\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.268507 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.270635 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.277143 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-config-data\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.294024 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.295758 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-config-data\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.308768 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.312603 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.368566 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-config-data\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 
16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.368715 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwkm\" (UniqueName: \"kubernetes.io/projected/97f78257-8fee-47e4-86dd-072411c9895d-kube-api-access-ldwkm\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.368952 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.369293 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.380433 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqbq2\" (UniqueName: \"kubernetes.io/projected/e2832cde-7564-4af6-8e09-b66142ab6c27-kube-api-access-pqbq2\") pod \"nova-metadata-0\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.401070 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9bmf\" (UniqueName: \"kubernetes.io/projected/56ecff6a-6cbd-4171-8bde-f10826eddb30-kube-api-access-f9bmf\") pod \"nova-api-0\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.417192 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb95f7db7-lrdp9"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.423765 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.440806 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.459448 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb95f7db7-lrdp9"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.471400 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldwkm\" (UniqueName: \"kubernetes.io/projected/97f78257-8fee-47e4-86dd-072411c9895d-kube-api-access-ldwkm\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.471534 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.471643 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-config-data\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.486711 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.487374 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.489581 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-config-data\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.505456 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldwkm\" (UniqueName: \"kubernetes.io/projected/97f78257-8fee-47e4-86dd-072411c9895d-kube-api-access-ldwkm\") pod \"nova-scheduler-0\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.533564 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.534989 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.543598 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.565402 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.574603 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-sb\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.574687 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-swift-storage-0\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.574734 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-config\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.574778 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgk42\" (UniqueName: \"kubernetes.io/projected/edbeb15e-e56a-4311-82a6-71f46a0b81d8-kube-api-access-kgk42\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " 
pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.574823 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-svc\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.574904 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-nb\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.677643 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.678415 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-config\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.678729 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-config\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.678846 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgk42\" (UniqueName: 
\"kubernetes.io/projected/edbeb15e-e56a-4311-82a6-71f46a0b81d8-kube-api-access-kgk42\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.678965 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-svc\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.679242 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8x6f\" (UniqueName: \"kubernetes.io/projected/82882565-4fa8-4300-9cb0-e66837c374aa-kube-api-access-n8x6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.679408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-nb\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.679526 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.679581 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.680520 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-svc\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.681157 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-nb\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.681247 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-sb\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.681320 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-swift-storage-0\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.682162 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-swift-storage-0\") pod 
\"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.682936 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-sb\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.707891 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgk42\" (UniqueName: \"kubernetes.io/projected/edbeb15e-e56a-4311-82a6-71f46a0b81d8-kube-api-access-kgk42\") pod \"dnsmasq-dns-bb95f7db7-lrdp9\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.784369 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8x6f\" (UniqueName: \"kubernetes.io/projected/82882565-4fa8-4300-9cb0-e66837c374aa-kube-api-access-n8x6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.784444 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.784470 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.790110 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.791620 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.796641 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.811242 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8x6f\" (UniqueName: \"kubernetes.io/projected/82882565-4fa8-4300-9cb0-e66837c374aa-kube-api-access-n8x6f\") pod \"nova-cell1-novncproxy-0\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.881137 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-swpq5"] Feb 16 12:53:58 crc kubenswrapper[4799]: I0216 12:53:58.890877 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.062621 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.220393 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:53:59 crc kubenswrapper[4799]: W0216 12:53:59.248424 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56ecff6a_6cbd_4171_8bde_f10826eddb30.slice/crio-2e1fbdf3952b35f698f6a78946c025b85981b726295b711daf16b836445e74bf WatchSource:0}: Error finding container 2e1fbdf3952b35f698f6a78946c025b85981b726295b711daf16b836445e74bf: Status 404 returned error can't find the container with id 2e1fbdf3952b35f698f6a78946c025b85981b726295b711daf16b836445e74bf Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.257745 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqjts"] Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.283097 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.291569 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.309002 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.335891 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqjts"] Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.379363 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.413440 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-scripts\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.413667 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.413846 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-config-data\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 
12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.413870 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmctw\" (UniqueName: \"kubernetes.io/projected/37caa4cf-2608-483b-a75d-eb94ae2d41f5-kube-api-access-vmctw\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.516585 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-config-data\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.516644 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmctw\" (UniqueName: \"kubernetes.io/projected/37caa4cf-2608-483b-a75d-eb94ae2d41f5-kube-api-access-vmctw\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.516679 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-scripts\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.516707 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " 
pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.522834 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-scripts\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.523144 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-config-data\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.523189 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.539795 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmctw\" (UniqueName: \"kubernetes.io/projected/37caa4cf-2608-483b-a75d-eb94ae2d41f5-kube-api-access-vmctw\") pod \"nova-cell1-conductor-db-sync-bqjts\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.603139 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.608585 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.614365 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb95f7db7-lrdp9"] Feb 16 12:53:59 crc kubenswrapper[4799]: W0216 12:53:59.638001 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedbeb15e_e56a_4311_82a6_71f46a0b81d8.slice/crio-6992f80d9e0b12225314c9c6811903a3e76816e775815ac3e5baf21b6efb8f22 WatchSource:0}: Error finding container 6992f80d9e0b12225314c9c6811903a3e76816e775815ac3e5baf21b6efb8f22: Status 404 returned error can't find the container with id 6992f80d9e0b12225314c9c6811903a3e76816e775815ac3e5baf21b6efb8f22 Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.676936 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"97f78257-8fee-47e4-86dd-072411c9895d","Type":"ContainerStarted","Data":"1d0db6bfefe858fa9fe48395ddc47ef9fb96177e187c124f6d8e1ba54c22dc68"} Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.679849 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2832cde-7564-4af6-8e09-b66142ab6c27","Type":"ContainerStarted","Data":"276ee8fe91dcb28a813ce15436cc86d307c3116c3689f8b49fdf24d433f0b93d"} Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.681617 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82882565-4fa8-4300-9cb0-e66837c374aa","Type":"ContainerStarted","Data":"f542a094425b91b87767b2eb937c7805d24e89da1ceb811f7d4444b0df44d293"} Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.684429 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" 
event={"ID":"edbeb15e-e56a-4311-82a6-71f46a0b81d8","Type":"ContainerStarted","Data":"6992f80d9e0b12225314c9c6811903a3e76816e775815ac3e5baf21b6efb8f22"} Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.692003 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-swpq5" event={"ID":"26e37ea2-a3b0-43c1-94d4-c545edaed454","Type":"ContainerStarted","Data":"93903097425ced611a882b483272217a0be1281f1517258e6ec2023ff409e261"} Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.692058 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-swpq5" event={"ID":"26e37ea2-a3b0-43c1-94d4-c545edaed454","Type":"ContainerStarted","Data":"e100fa49a4e2776ea5068f5bc5181c2174b7582a62da63ff0343633a9d550922"} Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.693777 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ecff6a-6cbd-4171-8bde-f10826eddb30","Type":"ContainerStarted","Data":"2e1fbdf3952b35f698f6a78946c025b85981b726295b711daf16b836445e74bf"} Feb 16 12:53:59 crc kubenswrapper[4799]: I0216 12:53:59.719805 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-swpq5" podStartSLOduration=2.71978289 podStartE2EDuration="2.71978289s" podCreationTimestamp="2026-02-16 12:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:53:59.712499552 +0000 UTC m=+1345.305514886" watchObservedRunningTime="2026-02-16 12:53:59.71978289 +0000 UTC m=+1345.312798224" Feb 16 12:54:00 crc kubenswrapper[4799]: I0216 12:54:00.190891 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqjts"] Feb 16 12:54:00 crc kubenswrapper[4799]: I0216 12:54:00.731284 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqjts" 
event={"ID":"37caa4cf-2608-483b-a75d-eb94ae2d41f5","Type":"ContainerStarted","Data":"29d2a78e03cc80a9595d47c98436406d72247db0518515fa307fd0042b96b1c7"} Feb 16 12:54:00 crc kubenswrapper[4799]: I0216 12:54:00.733467 4799 generic.go:334] "Generic (PLEG): container finished" podID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" containerID="114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3" exitCode=0 Feb 16 12:54:00 crc kubenswrapper[4799]: I0216 12:54:00.734878 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" event={"ID":"edbeb15e-e56a-4311-82a6-71f46a0b81d8","Type":"ContainerDied","Data":"114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3"} Feb 16 12:54:01 crc kubenswrapper[4799]: I0216 12:54:01.744481 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqjts" event={"ID":"37caa4cf-2608-483b-a75d-eb94ae2d41f5","Type":"ContainerStarted","Data":"8b75db812a15f28ecefb79692e06e0d8edec74be2fce361e62c0f7079d9581e3"} Feb 16 12:54:01 crc kubenswrapper[4799]: I0216 12:54:01.747740 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" event={"ID":"edbeb15e-e56a-4311-82a6-71f46a0b81d8","Type":"ContainerStarted","Data":"d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab"} Feb 16 12:54:01 crc kubenswrapper[4799]: I0216 12:54:01.748756 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:54:01 crc kubenswrapper[4799]: I0216 12:54:01.785294 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bqjts" podStartSLOduration=2.785271288 podStartE2EDuration="2.785271288s" podCreationTimestamp="2026-02-16 12:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:01.774181751 +0000 
UTC m=+1347.367197085" watchObservedRunningTime="2026-02-16 12:54:01.785271288 +0000 UTC m=+1347.378286672" Feb 16 12:54:01 crc kubenswrapper[4799]: I0216 12:54:01.802217 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" podStartSLOduration=3.802193672 podStartE2EDuration="3.802193672s" podCreationTimestamp="2026-02-16 12:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:01.792433423 +0000 UTC m=+1347.385448757" watchObservedRunningTime="2026-02-16 12:54:01.802193672 +0000 UTC m=+1347.395209006" Feb 16 12:54:02 crc kubenswrapper[4799]: I0216 12:54:02.267543 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:02 crc kubenswrapper[4799]: I0216 12:54:02.340254 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.802430 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ecff6a-6cbd-4171-8bde-f10826eddb30","Type":"ContainerStarted","Data":"5744f1e8b7d32106d0369a8cbbed17dc08b6bf4c4297d574f1a2f543c499e1c0"} Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.802983 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ecff6a-6cbd-4171-8bde-f10826eddb30","Type":"ContainerStarted","Data":"81db254e0f5fd7265851d5026dedc1d6d9f6727bc6e729012c8b1c3c67928625"} Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.806326 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"97f78257-8fee-47e4-86dd-072411c9895d","Type":"ContainerStarted","Data":"1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6"} Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.809565 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"e2832cde-7564-4af6-8e09-b66142ab6c27","Type":"ContainerStarted","Data":"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436"} Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.809620 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2832cde-7564-4af6-8e09-b66142ab6c27","Type":"ContainerStarted","Data":"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c"} Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.809757 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerName="nova-metadata-log" containerID="cri-o://5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c" gracePeriod=30 Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.813349 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerName="nova-metadata-metadata" containerID="cri-o://3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436" gracePeriod=30 Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.822372 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82882565-4fa8-4300-9cb0-e66837c374aa","Type":"ContainerStarted","Data":"550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1"} Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.822591 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="82882565-4fa8-4300-9cb0-e66837c374aa" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1" gracePeriod=30 Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.835724 4799 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.646904743 podStartE2EDuration="9.835693428s" podCreationTimestamp="2026-02-16 12:53:57 +0000 UTC" firstStartedPulling="2026-02-16 12:53:59.251308388 +0000 UTC m=+1344.844323722" lastFinishedPulling="2026-02-16 12:54:05.440097073 +0000 UTC m=+1351.033112407" observedRunningTime="2026-02-16 12:54:06.823295803 +0000 UTC m=+1352.416311147" watchObservedRunningTime="2026-02-16 12:54:06.835693428 +0000 UTC m=+1352.428708802" Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.851493 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.797789999 podStartE2EDuration="8.851470599s" podCreationTimestamp="2026-02-16 12:53:58 +0000 UTC" firstStartedPulling="2026-02-16 12:53:59.381454881 +0000 UTC m=+1344.974470205" lastFinishedPulling="2026-02-16 12:54:05.435135471 +0000 UTC m=+1351.028150805" observedRunningTime="2026-02-16 12:54:06.849586675 +0000 UTC m=+1352.442602009" watchObservedRunningTime="2026-02-16 12:54:06.851470599 +0000 UTC m=+1352.444485933" Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.872974 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.097262127 podStartE2EDuration="8.872950504s" podCreationTimestamp="2026-02-16 12:53:58 +0000 UTC" firstStartedPulling="2026-02-16 12:53:59.659399523 +0000 UTC m=+1345.252414847" lastFinishedPulling="2026-02-16 12:54:05.43508789 +0000 UTC m=+1351.028103224" observedRunningTime="2026-02-16 12:54:06.867049815 +0000 UTC m=+1352.460065149" watchObservedRunningTime="2026-02-16 12:54:06.872950504 +0000 UTC m=+1352.465965838" Feb 16 12:54:06 crc kubenswrapper[4799]: I0216 12:54:06.890083 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.56954816 podStartE2EDuration="9.890056873s" podCreationTimestamp="2026-02-16 
12:53:57 +0000 UTC" firstStartedPulling="2026-02-16 12:53:59.114581027 +0000 UTC m=+1344.707596371" lastFinishedPulling="2026-02-16 12:54:05.43508975 +0000 UTC m=+1351.028105084" observedRunningTime="2026-02-16 12:54:06.889615091 +0000 UTC m=+1352.482630435" watchObservedRunningTime="2026-02-16 12:54:06.890056873 +0000 UTC m=+1352.483072207" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.547444 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.663628 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-combined-ca-bundle\") pod \"e2832cde-7564-4af6-8e09-b66142ab6c27\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.663872 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqbq2\" (UniqueName: \"kubernetes.io/projected/e2832cde-7564-4af6-8e09-b66142ab6c27-kube-api-access-pqbq2\") pod \"e2832cde-7564-4af6-8e09-b66142ab6c27\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.663915 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-config-data\") pod \"e2832cde-7564-4af6-8e09-b66142ab6c27\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.663950 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2832cde-7564-4af6-8e09-b66142ab6c27-logs\") pod \"e2832cde-7564-4af6-8e09-b66142ab6c27\" (UID: \"e2832cde-7564-4af6-8e09-b66142ab6c27\") " Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.664517 4799 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2832cde-7564-4af6-8e09-b66142ab6c27-logs" (OuterVolumeSpecName: "logs") pod "e2832cde-7564-4af6-8e09-b66142ab6c27" (UID: "e2832cde-7564-4af6-8e09-b66142ab6c27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.685349 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2832cde-7564-4af6-8e09-b66142ab6c27-kube-api-access-pqbq2" (OuterVolumeSpecName: "kube-api-access-pqbq2") pod "e2832cde-7564-4af6-8e09-b66142ab6c27" (UID: "e2832cde-7564-4af6-8e09-b66142ab6c27"). InnerVolumeSpecName "kube-api-access-pqbq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.701511 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2832cde-7564-4af6-8e09-b66142ab6c27" (UID: "e2832cde-7564-4af6-8e09-b66142ab6c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.708615 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-config-data" (OuterVolumeSpecName: "config-data") pod "e2832cde-7564-4af6-8e09-b66142ab6c27" (UID: "e2832cde-7564-4af6-8e09-b66142ab6c27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.767185 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqbq2\" (UniqueName: \"kubernetes.io/projected/e2832cde-7564-4af6-8e09-b66142ab6c27-kube-api-access-pqbq2\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.767232 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.767252 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2832cde-7564-4af6-8e09-b66142ab6c27-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.767264 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2832cde-7564-4af6-8e09-b66142ab6c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.837946 4799 generic.go:334] "Generic (PLEG): container finished" podID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerID="3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436" exitCode=0 Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.837986 4799 generic.go:334] "Generic (PLEG): container finished" podID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerID="5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c" exitCode=143 Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.838018 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.838049 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2832cde-7564-4af6-8e09-b66142ab6c27","Type":"ContainerDied","Data":"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436"} Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.838102 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2832cde-7564-4af6-8e09-b66142ab6c27","Type":"ContainerDied","Data":"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c"} Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.838155 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2832cde-7564-4af6-8e09-b66142ab6c27","Type":"ContainerDied","Data":"276ee8fe91dcb28a813ce15436cc86d307c3116c3689f8b49fdf24d433f0b93d"} Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.838155 4799 scope.go:117] "RemoveContainer" containerID="3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.866007 4799 scope.go:117] "RemoveContainer" containerID="5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.912058 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.919107 4799 scope.go:117] "RemoveContainer" containerID="3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436" Feb 16 12:54:07 crc kubenswrapper[4799]: E0216 12:54:07.919633 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436\": container with ID starting with 3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436 
not found: ID does not exist" containerID="3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.919682 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436"} err="failed to get container status \"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436\": rpc error: code = NotFound desc = could not find container \"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436\": container with ID starting with 3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436 not found: ID does not exist" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.919712 4799 scope.go:117] "RemoveContainer" containerID="5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c" Feb 16 12:54:07 crc kubenswrapper[4799]: E0216 12:54:07.932923 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c\": container with ID starting with 5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c not found: ID does not exist" containerID="5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.932980 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c"} err="failed to get container status \"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c\": rpc error: code = NotFound desc = could not find container \"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c\": container with ID starting with 5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c not found: ID does not exist" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 
12:54:07.933012 4799 scope.go:117] "RemoveContainer" containerID="3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.936372 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436"} err="failed to get container status \"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436\": rpc error: code = NotFound desc = could not find container \"3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436\": container with ID starting with 3ee128477d480d0c10d461b39759abdd55760ea4a0f2801f1c923d6de21d7436 not found: ID does not exist" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.936397 4799 scope.go:117] "RemoveContainer" containerID="5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.936924 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c"} err="failed to get container status \"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c\": rpc error: code = NotFound desc = could not find container \"5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c\": container with ID starting with 5d7c3bd2753c40dd5d35efeca3307ce49bf953fd031e0d713eef7207c842466c not found: ID does not exist" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.945656 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.959394 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:07 crc kubenswrapper[4799]: E0216 12:54:07.959930 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" 
containerName="nova-metadata-log" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.959963 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerName="nova-metadata-log" Feb 16 12:54:07 crc kubenswrapper[4799]: E0216 12:54:07.959982 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerName="nova-metadata-metadata" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.959992 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerName="nova-metadata-metadata" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.960312 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerName="nova-metadata-metadata" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.960347 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" containerName="nova-metadata-log" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.961827 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.963723 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.964194 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.972593 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.978777 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.978826 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsmt5\" (UniqueName: \"kubernetes.io/projected/56341bd8-834c-411d-9e21-e9b78f312c0f-kube-api-access-wsmt5\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.978857 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56341bd8-834c-411d-9e21-e9b78f312c0f-logs\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.978893 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-config-data\") pod \"nova-metadata-0\" (UID: 
\"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:07 crc kubenswrapper[4799]: I0216 12:54:07.979033 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.080617 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.081039 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.081071 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsmt5\" (UniqueName: \"kubernetes.io/projected/56341bd8-834c-411d-9e21-e9b78f312c0f-kube-api-access-wsmt5\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.081103 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56341bd8-834c-411d-9e21-e9b78f312c0f-logs\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 
12:54:08.081213 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-config-data\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.082548 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56341bd8-834c-411d-9e21-e9b78f312c0f-logs\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.086902 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-config-data\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.087278 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.088659 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.102269 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsmt5\" (UniqueName: \"kubernetes.io/projected/56341bd8-834c-411d-9e21-e9b78f312c0f-kube-api-access-wsmt5\") pod 
\"nova-metadata-0\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.284913 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.488467 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.489858 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.679335 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.679405 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.739641 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.753298 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.799298 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.869218 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56341bd8-834c-411d-9e21-e9b78f312c0f","Type":"ContainerStarted","Data":"ea182bb34842a91d9b41ad6487fbaf33ef088645dd0f1e3edcf0e039a5f59716"} Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.887537 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d546d59d7-9lr8f"] Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.887794 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" podUID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" containerName="dnsmasq-dns" containerID="cri-o://33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f" gracePeriod=10 Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.892796 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:08 crc kubenswrapper[4799]: I0216 12:54:08.934915 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.174478 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2832cde-7564-4af6-8e09-b66142ab6c27" path="/var/lib/kubelet/pods/e2832cde-7564-4af6-8e09-b66142ab6c27/volumes" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.487985 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.551211 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-config\") pod \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.551538 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-nb\") pod \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.551669 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-swift-storage-0\") pod \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.551780 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tfff\" (UniqueName: \"kubernetes.io/projected/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-kube-api-access-4tfff\") pod \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.551948 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-sb\") pod \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.552083 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-svc\") pod \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\" (UID: \"86fd8d1c-0696-41e9-a6f9-53efb050f0ce\") " Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.572388 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-kube-api-access-4tfff" (OuterVolumeSpecName: "kube-api-access-4tfff") pod "86fd8d1c-0696-41e9-a6f9-53efb050f0ce" (UID: "86fd8d1c-0696-41e9-a6f9-53efb050f0ce"). InnerVolumeSpecName "kube-api-access-4tfff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.572611 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.572895 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.616806 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86fd8d1c-0696-41e9-a6f9-53efb050f0ce" (UID: "86fd8d1c-0696-41e9-a6f9-53efb050f0ce"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.616844 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-config" (OuterVolumeSpecName: "config") pod "86fd8d1c-0696-41e9-a6f9-53efb050f0ce" (UID: "86fd8d1c-0696-41e9-a6f9-53efb050f0ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.626451 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86fd8d1c-0696-41e9-a6f9-53efb050f0ce" (UID: "86fd8d1c-0696-41e9-a6f9-53efb050f0ce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.627776 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86fd8d1c-0696-41e9-a6f9-53efb050f0ce" (UID: "86fd8d1c-0696-41e9-a6f9-53efb050f0ce"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.668704 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.668739 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.668752 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tfff\" (UniqueName: \"kubernetes.io/projected/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-kube-api-access-4tfff\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.668762 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.668774 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.678653 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "86fd8d1c-0696-41e9-a6f9-53efb050f0ce" (UID: "86fd8d1c-0696-41e9-a6f9-53efb050f0ce"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.770580 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86fd8d1c-0696-41e9-a6f9-53efb050f0ce-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.881843 4799 generic.go:334] "Generic (PLEG): container finished" podID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" containerID="33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f" exitCode=0 Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.883155 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" event={"ID":"86fd8d1c-0696-41e9-a6f9-53efb050f0ce","Type":"ContainerDied","Data":"33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f"} Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.883287 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" event={"ID":"86fd8d1c-0696-41e9-a6f9-53efb050f0ce","Type":"ContainerDied","Data":"7b8b631582cb45509ac8d1eb578b210dfd0294deb71d48c98618eb02ae68347b"} Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.883437 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d546d59d7-9lr8f" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.883450 4799 scope.go:117] "RemoveContainer" containerID="33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.887656 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56341bd8-834c-411d-9e21-e9b78f312c0f","Type":"ContainerStarted","Data":"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b"} Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.887708 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56341bd8-834c-411d-9e21-e9b78f312c0f","Type":"ContainerStarted","Data":"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da"} Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.911103 4799 scope.go:117] "RemoveContainer" containerID="3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.914558 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.914536736 podStartE2EDuration="2.914536736s" podCreationTimestamp="2026-02-16 12:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:09.908867603 +0000 UTC m=+1355.501882937" watchObservedRunningTime="2026-02-16 12:54:09.914536736 +0000 UTC m=+1355.507552070" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.936251 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d546d59d7-9lr8f"] Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.946875 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d546d59d7-9lr8f"] Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.947644 4799 scope.go:117] 
"RemoveContainer" containerID="33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f" Feb 16 12:54:09 crc kubenswrapper[4799]: E0216 12:54:09.948154 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f\": container with ID starting with 33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f not found: ID does not exist" containerID="33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.948197 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f"} err="failed to get container status \"33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f\": rpc error: code = NotFound desc = could not find container \"33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f\": container with ID starting with 33c757d253afe716d341422b11f089377fe7eaa92c2f456cf08f53c96cb8505f not found: ID does not exist" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.948222 4799 scope.go:117] "RemoveContainer" containerID="3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782" Feb 16 12:54:09 crc kubenswrapper[4799]: E0216 12:54:09.948885 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782\": container with ID starting with 3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782 not found: ID does not exist" containerID="3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782" Feb 16 12:54:09 crc kubenswrapper[4799]: I0216 12:54:09.948912 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782"} err="failed to get container status \"3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782\": rpc error: code = NotFound desc = could not find container \"3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782\": container with ID starting with 3a1562992ddcc35f26b09cf021ddf16f29390b90ea74e64fc47f9dec9b2b1782 not found: ID does not exist" Feb 16 12:54:11 crc kubenswrapper[4799]: I0216 12:54:11.175732 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" path="/var/lib/kubelet/pods/86fd8d1c-0696-41e9-a6f9-53efb050f0ce/volumes" Feb 16 12:54:11 crc kubenswrapper[4799]: I0216 12:54:11.201091 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 12:54:11 crc kubenswrapper[4799]: I0216 12:54:11.912443 4799 generic.go:334] "Generic (PLEG): container finished" podID="26e37ea2-a3b0-43c1-94d4-c545edaed454" containerID="93903097425ced611a882b483272217a0be1281f1517258e6ec2023ff409e261" exitCode=0 Feb 16 12:54:11 crc kubenswrapper[4799]: I0216 12:54:11.912526 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-swpq5" event={"ID":"26e37ea2-a3b0-43c1-94d4-c545edaed454","Type":"ContainerDied","Data":"93903097425ced611a882b483272217a0be1281f1517258e6ec2023ff409e261"} Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.285433 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.286211 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.329813 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.443911 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk7bn\" (UniqueName: \"kubernetes.io/projected/26e37ea2-a3b0-43c1-94d4-c545edaed454-kube-api-access-vk7bn\") pod \"26e37ea2-a3b0-43c1-94d4-c545edaed454\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.444054 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-config-data\") pod \"26e37ea2-a3b0-43c1-94d4-c545edaed454\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.444220 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-combined-ca-bundle\") pod \"26e37ea2-a3b0-43c1-94d4-c545edaed454\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.444257 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-scripts\") pod \"26e37ea2-a3b0-43c1-94d4-c545edaed454\" (UID: \"26e37ea2-a3b0-43c1-94d4-c545edaed454\") " Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.454371 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e37ea2-a3b0-43c1-94d4-c545edaed454-kube-api-access-vk7bn" (OuterVolumeSpecName: "kube-api-access-vk7bn") pod "26e37ea2-a3b0-43c1-94d4-c545edaed454" (UID: "26e37ea2-a3b0-43c1-94d4-c545edaed454"). InnerVolumeSpecName "kube-api-access-vk7bn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.456428 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-scripts" (OuterVolumeSpecName: "scripts") pod "26e37ea2-a3b0-43c1-94d4-c545edaed454" (UID: "26e37ea2-a3b0-43c1-94d4-c545edaed454"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.486891 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-config-data" (OuterVolumeSpecName: "config-data") pod "26e37ea2-a3b0-43c1-94d4-c545edaed454" (UID: "26e37ea2-a3b0-43c1-94d4-c545edaed454"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.502822 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26e37ea2-a3b0-43c1-94d4-c545edaed454" (UID: "26e37ea2-a3b0-43c1-94d4-c545edaed454"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.546738 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.547046 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.547058 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e37ea2-a3b0-43c1-94d4-c545edaed454-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.547067 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk7bn\" (UniqueName: \"kubernetes.io/projected/26e37ea2-a3b0-43c1-94d4-c545edaed454-kube-api-access-vk7bn\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.932357 4799 generic.go:334] "Generic (PLEG): container finished" podID="37caa4cf-2608-483b-a75d-eb94ae2d41f5" containerID="8b75db812a15f28ecefb79692e06e0d8edec74be2fce361e62c0f7079d9581e3" exitCode=0 Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.932458 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqjts" event={"ID":"37caa4cf-2608-483b-a75d-eb94ae2d41f5","Type":"ContainerDied","Data":"8b75db812a15f28ecefb79692e06e0d8edec74be2fce361e62c0f7079d9581e3"} Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.934476 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-swpq5" 
event={"ID":"26e37ea2-a3b0-43c1-94d4-c545edaed454","Type":"ContainerDied","Data":"e100fa49a4e2776ea5068f5bc5181c2174b7582a62da63ff0343633a9d550922"} Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.934522 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e100fa49a4e2776ea5068f5bc5181c2174b7582a62da63ff0343633a9d550922" Feb 16 12:54:13 crc kubenswrapper[4799]: I0216 12:54:13.934785 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-swpq5" Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.125146 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.125598 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-api" containerID="cri-o://5744f1e8b7d32106d0369a8cbbed17dc08b6bf4c4297d574f1a2f543c499e1c0" gracePeriod=30 Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.125912 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-log" containerID="cri-o://81db254e0f5fd7265851d5026dedc1d6d9f6727bc6e729012c8b1c3c67928625" gracePeriod=30 Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.145092 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.145371 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="97f78257-8fee-47e4-86dd-072411c9895d" containerName="nova-scheduler-scheduler" containerID="cri-o://1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6" gracePeriod=30 Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.180910 4799 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.913064 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.913676 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="05acd04d-4502-4380-be32-5997bb43cc76" containerName="kube-state-metrics" containerID="cri-o://ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783" gracePeriod=30 Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.948945 4799 generic.go:334] "Generic (PLEG): container finished" podID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerID="81db254e0f5fd7265851d5026dedc1d6d9f6727bc6e729012c8b1c3c67928625" exitCode=143 Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.949267 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerName="nova-metadata-log" containerID="cri-o://5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da" gracePeriod=30 Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.949613 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ecff6a-6cbd-4171-8bde-f10826eddb30","Type":"ContainerDied","Data":"81db254e0f5fd7265851d5026dedc1d6d9f6727bc6e729012c8b1c3c67928625"} Feb 16 12:54:14 crc kubenswrapper[4799]: I0216 12:54:14.950186 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerName="nova-metadata-metadata" containerID="cri-o://dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b" gracePeriod=30 Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.568318 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.691437 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-config-data\") pod \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.691539 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-scripts\") pod \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.691559 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmctw\" (UniqueName: \"kubernetes.io/projected/37caa4cf-2608-483b-a75d-eb94ae2d41f5-kube-api-access-vmctw\") pod \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.691626 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-combined-ca-bundle\") pod \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\" (UID: \"37caa4cf-2608-483b-a75d-eb94ae2d41f5\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.707275 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-scripts" (OuterVolumeSpecName: "scripts") pod "37caa4cf-2608-483b-a75d-eb94ae2d41f5" (UID: "37caa4cf-2608-483b-a75d-eb94ae2d41f5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.713459 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37caa4cf-2608-483b-a75d-eb94ae2d41f5-kube-api-access-vmctw" (OuterVolumeSpecName: "kube-api-access-vmctw") pod "37caa4cf-2608-483b-a75d-eb94ae2d41f5" (UID: "37caa4cf-2608-483b-a75d-eb94ae2d41f5"). InnerVolumeSpecName "kube-api-access-vmctw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.732303 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-config-data" (OuterVolumeSpecName: "config-data") pod "37caa4cf-2608-483b-a75d-eb94ae2d41f5" (UID: "37caa4cf-2608-483b-a75d-eb94ae2d41f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.742579 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37caa4cf-2608-483b-a75d-eb94ae2d41f5" (UID: "37caa4cf-2608-483b-a75d-eb94ae2d41f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.794719 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmctw\" (UniqueName: \"kubernetes.io/projected/37caa4cf-2608-483b-a75d-eb94ae2d41f5-kube-api-access-vmctw\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.794769 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.794784 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.794793 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37caa4cf-2608-483b-a75d-eb94ae2d41f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.835219 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.852266 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.895972 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-combined-ca-bundle\") pod \"56341bd8-834c-411d-9e21-e9b78f312c0f\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.896022 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56341bd8-834c-411d-9e21-e9b78f312c0f-logs\") pod \"56341bd8-834c-411d-9e21-e9b78f312c0f\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.896209 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-nova-metadata-tls-certs\") pod \"56341bd8-834c-411d-9e21-e9b78f312c0f\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.896338 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd8th\" (UniqueName: \"kubernetes.io/projected/05acd04d-4502-4380-be32-5997bb43cc76-kube-api-access-sd8th\") pod \"05acd04d-4502-4380-be32-5997bb43cc76\" (UID: \"05acd04d-4502-4380-be32-5997bb43cc76\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.896406 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-config-data\") pod \"56341bd8-834c-411d-9e21-e9b78f312c0f\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.896467 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wsmt5\" (UniqueName: \"kubernetes.io/projected/56341bd8-834c-411d-9e21-e9b78f312c0f-kube-api-access-wsmt5\") pod \"56341bd8-834c-411d-9e21-e9b78f312c0f\" (UID: \"56341bd8-834c-411d-9e21-e9b78f312c0f\") " Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.897413 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56341bd8-834c-411d-9e21-e9b78f312c0f-logs" (OuterVolumeSpecName: "logs") pod "56341bd8-834c-411d-9e21-e9b78f312c0f" (UID: "56341bd8-834c-411d-9e21-e9b78f312c0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.897583 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56341bd8-834c-411d-9e21-e9b78f312c0f-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.905486 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05acd04d-4502-4380-be32-5997bb43cc76-kube-api-access-sd8th" (OuterVolumeSpecName: "kube-api-access-sd8th") pod "05acd04d-4502-4380-be32-5997bb43cc76" (UID: "05acd04d-4502-4380-be32-5997bb43cc76"). InnerVolumeSpecName "kube-api-access-sd8th". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.905597 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56341bd8-834c-411d-9e21-e9b78f312c0f-kube-api-access-wsmt5" (OuterVolumeSpecName: "kube-api-access-wsmt5") pod "56341bd8-834c-411d-9e21-e9b78f312c0f" (UID: "56341bd8-834c-411d-9e21-e9b78f312c0f"). InnerVolumeSpecName "kube-api-access-wsmt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.927358 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56341bd8-834c-411d-9e21-e9b78f312c0f" (UID: "56341bd8-834c-411d-9e21-e9b78f312c0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.944246 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-config-data" (OuterVolumeSpecName: "config-data") pod "56341bd8-834c-411d-9e21-e9b78f312c0f" (UID: "56341bd8-834c-411d-9e21-e9b78f312c0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.986096 4799 generic.go:334] "Generic (PLEG): container finished" podID="05acd04d-4502-4380-be32-5997bb43cc76" containerID="ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783" exitCode=2 Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.986201 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.986203 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05acd04d-4502-4380-be32-5997bb43cc76","Type":"ContainerDied","Data":"ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783"} Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.986349 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05acd04d-4502-4380-be32-5997bb43cc76","Type":"ContainerDied","Data":"11d8b5775a250fb2c3a8ced1a49186a0a3f721dce612c3d9ef329ba7787e5b34"} Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.986370 4799 scope.go:117] "RemoveContainer" containerID="ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.996182 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "56341bd8-834c-411d-9e21-e9b78f312c0f" (UID: "56341bd8-834c-411d-9e21-e9b78f312c0f"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.997526 4799 generic.go:334] "Generic (PLEG): container finished" podID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerID="5744f1e8b7d32106d0369a8cbbed17dc08b6bf4c4297d574f1a2f543c499e1c0" exitCode=0 Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.997600 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ecff6a-6cbd-4171-8bde-f10826eddb30","Type":"ContainerDied","Data":"5744f1e8b7d32106d0369a8cbbed17dc08b6bf4c4297d574f1a2f543c499e1c0"} Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.999054 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd8th\" (UniqueName: \"kubernetes.io/projected/05acd04d-4502-4380-be32-5997bb43cc76-kube-api-access-sd8th\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.999085 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.999097 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsmt5\" (UniqueName: \"kubernetes.io/projected/56341bd8-834c-411d-9e21-e9b78f312c0f-kube-api-access-wsmt5\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.999109 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:15 crc kubenswrapper[4799]: I0216 12:54:15.999135 4799 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56341bd8-834c-411d-9e21-e9b78f312c0f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:16 
crc kubenswrapper[4799]: I0216 12:54:16.002927 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqjts" event={"ID":"37caa4cf-2608-483b-a75d-eb94ae2d41f5","Type":"ContainerDied","Data":"29d2a78e03cc80a9595d47c98436406d72247db0518515fa307fd0042b96b1c7"} Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.002971 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29d2a78e03cc80a9595d47c98436406d72247db0518515fa307fd0042b96b1c7" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.003028 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqjts" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.006514 4799 generic.go:334] "Generic (PLEG): container finished" podID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerID="dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b" exitCode=0 Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.006581 4799 generic.go:334] "Generic (PLEG): container finished" podID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerID="5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da" exitCode=143 Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.006610 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56341bd8-834c-411d-9e21-e9b78f312c0f","Type":"ContainerDied","Data":"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b"} Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.006749 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56341bd8-834c-411d-9e21-e9b78f312c0f","Type":"ContainerDied","Data":"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da"} Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.006776 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"56341bd8-834c-411d-9e21-e9b78f312c0f","Type":"ContainerDied","Data":"ea182bb34842a91d9b41ad6487fbaf33ef088645dd0f1e3edcf0e039a5f59716"} Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.006828 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.066578 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.067064 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37caa4cf-2608-483b-a75d-eb94ae2d41f5" containerName="nova-cell1-conductor-db-sync" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067091 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="37caa4cf-2608-483b-a75d-eb94ae2d41f5" containerName="nova-cell1-conductor-db-sync" Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.067101 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerName="nova-metadata-log" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067107 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerName="nova-metadata-log" Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.067137 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerName="nova-metadata-metadata" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067144 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerName="nova-metadata-metadata" Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.067160 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e37ea2-a3b0-43c1-94d4-c545edaed454" containerName="nova-manage" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067167 4799 
state_mem.go:107] "Deleted CPUSet assignment" podUID="26e37ea2-a3b0-43c1-94d4-c545edaed454" containerName="nova-manage" Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.067175 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05acd04d-4502-4380-be32-5997bb43cc76" containerName="kube-state-metrics" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067183 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="05acd04d-4502-4380-be32-5997bb43cc76" containerName="kube-state-metrics" Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.067214 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" containerName="init" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067220 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" containerName="init" Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.067235 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" containerName="dnsmasq-dns" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067242 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" containerName="dnsmasq-dns" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067439 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e37ea2-a3b0-43c1-94d4-c545edaed454" containerName="nova-manage" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067463 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerName="nova-metadata-log" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067471 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="05acd04d-4502-4380-be32-5997bb43cc76" containerName="kube-state-metrics" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067482 4799 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="37caa4cf-2608-483b-a75d-eb94ae2d41f5" containerName="nova-cell1-conductor-db-sync" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067492 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" containerName="nova-metadata-metadata" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.067509 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="86fd8d1c-0696-41e9-a6f9-53efb050f0ce" containerName="dnsmasq-dns" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.070382 4799 scope.go:117] "RemoveContainer" containerID="ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783" Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.071433 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783\": container with ID starting with ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783 not found: ID does not exist" containerID="ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.071487 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783"} err="failed to get container status \"ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783\": rpc error: code = NotFound desc = could not find container \"ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783\": container with ID starting with ff15e89d6a2b3694eb76194cbe64202e524a0042d86c6fbba5241c3dfa4d0783 not found: ID does not exist" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.071522 4799 scope.go:117] "RemoveContainer" containerID="dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 
12:54:16.072635 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.075974 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.084148 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.102161 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.116955 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.119744 4799 scope.go:117] "RemoveContainer" containerID="5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.130566 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.133240 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.139201 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.139384 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.184254 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.187452 4799 scope.go:117] "RemoveContainer" containerID="dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b" Feb 16 12:54:16 crc kubenswrapper[4799]: E0216 12:54:16.187993 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b\": container with ID starting with dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b not found: ID does not exist" containerID="dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.188027 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b"} err="failed to get container status \"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b\": rpc error: code = NotFound desc = could not find container \"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b\": container with ID starting with dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b not found: ID does not exist" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.188060 4799 scope.go:117] "RemoveContainer" containerID="5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da" Feb 16 12:54:16 crc 
kubenswrapper[4799]: E0216 12:54:16.188777 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da\": container with ID starting with 5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da not found: ID does not exist" containerID="5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.188820 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da"} err="failed to get container status \"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da\": rpc error: code = NotFound desc = could not find container \"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da\": container with ID starting with 5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da not found: ID does not exist" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.188846 4799 scope.go:117] "RemoveContainer" containerID="dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.193229 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b"} err="failed to get container status \"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b\": rpc error: code = NotFound desc = could not find container \"dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b\": container with ID starting with dc3be9573f360751530588832bc9bc8446a08655c96af31bf59dd0fa36cfe14b not found: ID does not exist" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.193273 4799 scope.go:117] "RemoveContainer" containerID="5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da" Feb 16 
12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.198068 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.203014 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47764882-7881-4fbd-b682-c75a79736dea-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.203094 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.203179 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.203256 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdt2s\" (UniqueName: \"kubernetes.io/projected/47764882-7881-4fbd-b682-c75a79736dea-kube-api-access-zdt2s\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.203303 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wqpl\" (UniqueName: 
\"kubernetes.io/projected/11134cac-9930-424d-8a67-69a6ba98ff21-kube-api-access-7wqpl\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.203413 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47764882-7881-4fbd-b682-c75a79736dea-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.203484 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.203736 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da"} err="failed to get container status \"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da\": rpc error: code = NotFound desc = could not find container \"5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da\": container with ID starting with 5ef692f47820f80463f0f66a53065314720e53ef52e0b27bc007e8c1687030da not found: ID does not exist" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.223614 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.247513 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.250139 4799 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.253581 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.256091 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.256346 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.292737 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.305350 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-config-data\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.305601 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47764882-7881-4fbd-b682-c75a79736dea-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.305682 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-logs\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.305724 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.305878 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.305996 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47764882-7881-4fbd-b682-c75a79736dea-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.306049 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.306094 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.306233 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.306298 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wb5s\" (UniqueName: \"kubernetes.io/projected/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-kube-api-access-5wb5s\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.306377 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdt2s\" (UniqueName: \"kubernetes.io/projected/47764882-7881-4fbd-b682-c75a79736dea-kube-api-access-zdt2s\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.306434 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wqpl\" (UniqueName: \"kubernetes.io/projected/11134cac-9930-424d-8a67-69a6ba98ff21-kube-api-access-7wqpl\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.314268 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.315841 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.316092 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/11134cac-9930-424d-8a67-69a6ba98ff21-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.319390 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47764882-7881-4fbd-b682-c75a79736dea-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.324943 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdt2s\" (UniqueName: \"kubernetes.io/projected/47764882-7881-4fbd-b682-c75a79736dea-kube-api-access-zdt2s\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.325907 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47764882-7881-4fbd-b682-c75a79736dea-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47764882-7881-4fbd-b682-c75a79736dea\") " pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.330820 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wqpl\" (UniqueName: \"kubernetes.io/projected/11134cac-9930-424d-8a67-69a6ba98ff21-kube-api-access-7wqpl\") pod \"kube-state-metrics-0\" 
(UID: \"11134cac-9930-424d-8a67-69a6ba98ff21\") " pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.407215 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.407827 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-combined-ca-bundle\") pod \"56ecff6a-6cbd-4171-8bde-f10826eddb30\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.407993 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9bmf\" (UniqueName: \"kubernetes.io/projected/56ecff6a-6cbd-4171-8bde-f10826eddb30-kube-api-access-f9bmf\") pod \"56ecff6a-6cbd-4171-8bde-f10826eddb30\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.408065 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ecff6a-6cbd-4171-8bde-f10826eddb30-logs\") pod \"56ecff6a-6cbd-4171-8bde-f10826eddb30\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.408678 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-config-data\") pod \"56ecff6a-6cbd-4171-8bde-f10826eddb30\" (UID: \"56ecff6a-6cbd-4171-8bde-f10826eddb30\") " Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.408979 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.409073 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wb5s\" (UniqueName: \"kubernetes.io/projected/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-kube-api-access-5wb5s\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.409165 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-config-data\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.409194 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-logs\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.409211 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.409557 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ecff6a-6cbd-4171-8bde-f10826eddb30-logs" (OuterVolumeSpecName: "logs") pod "56ecff6a-6cbd-4171-8bde-f10826eddb30" (UID: "56ecff6a-6cbd-4171-8bde-f10826eddb30"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.413172 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-logs\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.414347 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ecff6a-6cbd-4171-8bde-f10826eddb30-kube-api-access-f9bmf" (OuterVolumeSpecName: "kube-api-access-f9bmf") pod "56ecff6a-6cbd-4171-8bde-f10826eddb30" (UID: "56ecff6a-6cbd-4171-8bde-f10826eddb30"). InnerVolumeSpecName "kube-api-access-f9bmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.416531 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-config-data\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.416870 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.420977 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.427285 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wb5s\" (UniqueName: \"kubernetes.io/projected/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-kube-api-access-5wb5s\") pod \"nova-metadata-0\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.444307 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-config-data" (OuterVolumeSpecName: "config-data") pod "56ecff6a-6cbd-4171-8bde-f10826eddb30" (UID: "56ecff6a-6cbd-4171-8bde-f10826eddb30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.456028 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56ecff6a-6cbd-4171-8bde-f10826eddb30" (UID: "56ecff6a-6cbd-4171-8bde-f10826eddb30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.482680 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.514455 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ecff6a-6cbd-4171-8bde-f10826eddb30-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.514493 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.514511 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ecff6a-6cbd-4171-8bde-f10826eddb30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.514525 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9bmf\" (UniqueName: \"kubernetes.io/projected/56ecff6a-6cbd-4171-8bde-f10826eddb30-kube-api-access-f9bmf\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.582964 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:54:16 crc kubenswrapper[4799]: I0216 12:54:16.995083 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.019758 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"47764882-7881-4fbd-b682-c75a79736dea","Type":"ContainerStarted","Data":"2d807838cc45cf9eafab8bfc3a35cbfff27c9cad9ff30ce17c361b708cc03a53"} Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.023490 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ecff6a-6cbd-4171-8bde-f10826eddb30","Type":"ContainerDied","Data":"2e1fbdf3952b35f698f6a78946c025b85981b726295b711daf16b836445e74bf"} Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.023534 4799 scope.go:117] "RemoveContainer" containerID="5744f1e8b7d32106d0369a8cbbed17dc08b6bf4c4297d574f1a2f543c499e1c0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.023703 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.100820 4799 scope.go:117] "RemoveContainer" containerID="81db254e0f5fd7265851d5026dedc1d6d9f6727bc6e729012c8b1c3c67928625" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.104496 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.132109 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.142231 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.166346 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05acd04d-4502-4380-be32-5997bb43cc76" path="/var/lib/kubelet/pods/05acd04d-4502-4380-be32-5997bb43cc76/volumes" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.167095 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56341bd8-834c-411d-9e21-e9b78f312c0f" path="/var/lib/kubelet/pods/56341bd8-834c-411d-9e21-e9b78f312c0f/volumes" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.169331 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" path="/var/lib/kubelet/pods/56ecff6a-6cbd-4171-8bde-f10826eddb30/volumes" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.170162 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:17 crc kubenswrapper[4799]: E0216 12:54:17.170620 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-api" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.170649 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-api" Feb 16 12:54:17 crc 
kubenswrapper[4799]: E0216 12:54:17.170683 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-log" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.170693 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-log" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.170925 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-log" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.170968 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ecff6a-6cbd-4171-8bde-f10826eddb30" containerName="nova-api-api" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.173330 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.173438 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.175924 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.231143 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.233731 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxcz\" (UniqueName: \"kubernetes.io/projected/b7f2eb3b-99ec-4288-bfee-a86318a69f79-kube-api-access-wzxcz\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.233900 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.233932 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f2eb3b-99ec-4288-bfee-a86318a69f79-logs\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.233995 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-config-data\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: W0216 12:54:17.242197 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e2b3fcf_00ec_4d11_9d47_b1aeb9b33a01.slice/crio-5df315f7cfcd6b1318a440fe9885035989f1dc72d2ca4e89c9c6e42590a5876c WatchSource:0}: Error finding container 5df315f7cfcd6b1318a440fe9885035989f1dc72d2ca4e89c9c6e42590a5876c: Status 404 returned error can't find the container with id 5df315f7cfcd6b1318a440fe9885035989f1dc72d2ca4e89c9c6e42590a5876c Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.336195 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzxcz\" (UniqueName: \"kubernetes.io/projected/b7f2eb3b-99ec-4288-bfee-a86318a69f79-kube-api-access-wzxcz\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.336317 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.336338 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f2eb3b-99ec-4288-bfee-a86318a69f79-logs\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.336378 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-config-data\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.337282 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b7f2eb3b-99ec-4288-bfee-a86318a69f79-logs\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.352088 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.352341 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-config-data\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.357811 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxcz\" (UniqueName: \"kubernetes.io/projected/b7f2eb3b-99ec-4288-bfee-a86318a69f79-kube-api-access-wzxcz\") pod \"nova-api-0\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.512021 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.637546 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.637895 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="ceilometer-central-agent" containerID="cri-o://1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec" gracePeriod=30 Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.639993 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="proxy-httpd" containerID="cri-o://b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022" gracePeriod=30 Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.640061 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="ceilometer-notification-agent" containerID="cri-o://6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1" gracePeriod=30 Feb 16 12:54:17 crc kubenswrapper[4799]: I0216 12:54:17.640245 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="sg-core" containerID="cri-o://a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e" gracePeriod=30 Feb 16 12:54:18 crc kubenswrapper[4799]: W0216 12:54:18.051277 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7f2eb3b_99ec_4288_bfee_a86318a69f79.slice/crio-d5f919c726281e18337ae2c2a3f06899dbf3fdbabe8d5032991c7ae54ac92dc1 WatchSource:0}: Error finding container 
d5f919c726281e18337ae2c2a3f06899dbf3fdbabe8d5032991c7ae54ac92dc1: Status 404 returned error can't find the container with id d5f919c726281e18337ae2c2a3f06899dbf3fdbabe8d5032991c7ae54ac92dc1 Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.052190 4799 generic.go:334] "Generic (PLEG): container finished" podID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerID="b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022" exitCode=0 Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.052222 4799 generic.go:334] "Generic (PLEG): container finished" podID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerID="a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e" exitCode=2 Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.052286 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerDied","Data":"b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022"} Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.052350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerDied","Data":"a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e"} Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.054345 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.054791 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11134cac-9930-424d-8a67-69a6ba98ff21","Type":"ContainerStarted","Data":"5b8c958a42b01efcf35fbdd7b6e622b784ba119555543000d3858b50bab1a24e"} Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.054841 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"11134cac-9930-424d-8a67-69a6ba98ff21","Type":"ContainerStarted","Data":"3dbb9b795f4fce305be8e6cc8ada4b8455719707fdcf43a7405f5fbe99fb7368"} Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.054935 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.058110 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"47764882-7881-4fbd-b682-c75a79736dea","Type":"ContainerStarted","Data":"f1b02cf693fed607d395e823e78d83f5947481cff73d2064ace988e0ae2b4708"} Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.058340 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.061520 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01","Type":"ContainerStarted","Data":"973f23b0608257c8513aff7563fa649fb7f20a418e4b587c6edf9cfc890d50c2"} Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.061569 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01","Type":"ContainerStarted","Data":"ed39cac0c1fdaf791042a69457f1ac122032a9b62b834185ad2e362cbf11b0ff"} Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.061583 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01","Type":"ContainerStarted","Data":"5df315f7cfcd6b1318a440fe9885035989f1dc72d2ca4e89c9c6e42590a5876c"} Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.087518 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.484282796 podStartE2EDuration="2.087495152s" podCreationTimestamp="2026-02-16 12:54:16 +0000 UTC" 
firstStartedPulling="2026-02-16 12:54:17.12016223 +0000 UTC m=+1362.713177564" lastFinishedPulling="2026-02-16 12:54:17.723374586 +0000 UTC m=+1363.316389920" observedRunningTime="2026-02-16 12:54:18.077161567 +0000 UTC m=+1363.670176931" watchObservedRunningTime="2026-02-16 12:54:18.087495152 +0000 UTC m=+1363.680510486" Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.130172 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.130149483 podStartE2EDuration="2.130149483s" podCreationTimestamp="2026-02-16 12:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:18.096554491 +0000 UTC m=+1363.689569825" watchObservedRunningTime="2026-02-16 12:54:18.130149483 +0000 UTC m=+1363.723164817" Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.148824 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.148773605 podStartE2EDuration="2.148773605s" podCreationTimestamp="2026-02-16 12:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:18.112988912 +0000 UTC m=+1363.706004246" watchObservedRunningTime="2026-02-16 12:54:18.148773605 +0000 UTC m=+1363.741788939" Feb 16 12:54:18 crc kubenswrapper[4799]: E0216 12:54:18.679545 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6 is running failed: container process not found" containerID="1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 12:54:18 crc kubenswrapper[4799]: E0216 12:54:18.683154 4799 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6 is running failed: container process not found" containerID="1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 12:54:18 crc kubenswrapper[4799]: E0216 12:54:18.684558 4799 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6 is running failed: container process not found" containerID="1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 12:54:18 crc kubenswrapper[4799]: E0216 12:54:18.684594 4799 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="97f78257-8fee-47e4-86dd-072411c9895d" containerName="nova-scheduler-scheduler" Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.912560 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.991909 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-combined-ca-bundle\") pod \"97f78257-8fee-47e4-86dd-072411c9895d\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.992239 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-config-data\") pod \"97f78257-8fee-47e4-86dd-072411c9895d\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " Feb 16 12:54:18 crc kubenswrapper[4799]: I0216 12:54:18.992305 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldwkm\" (UniqueName: \"kubernetes.io/projected/97f78257-8fee-47e4-86dd-072411c9895d-kube-api-access-ldwkm\") pod \"97f78257-8fee-47e4-86dd-072411c9895d\" (UID: \"97f78257-8fee-47e4-86dd-072411c9895d\") " Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.003903 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f78257-8fee-47e4-86dd-072411c9895d-kube-api-access-ldwkm" (OuterVolumeSpecName: "kube-api-access-ldwkm") pod "97f78257-8fee-47e4-86dd-072411c9895d" (UID: "97f78257-8fee-47e4-86dd-072411c9895d"). InnerVolumeSpecName "kube-api-access-ldwkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.025049 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-config-data" (OuterVolumeSpecName: "config-data") pod "97f78257-8fee-47e4-86dd-072411c9895d" (UID: "97f78257-8fee-47e4-86dd-072411c9895d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.029321 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97f78257-8fee-47e4-86dd-072411c9895d" (UID: "97f78257-8fee-47e4-86dd-072411c9895d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.080473 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7f2eb3b-99ec-4288-bfee-a86318a69f79","Type":"ContainerStarted","Data":"f02fb478a9ab493e71d605c0f0636011e2a65981a64794466b45f9981d40781d"} Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.081620 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7f2eb3b-99ec-4288-bfee-a86318a69f79","Type":"ContainerStarted","Data":"61af341e07b37d2c6cdba6f679daf0102526c4f9c1ca72250052fda0cccaddca"} Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.081738 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7f2eb3b-99ec-4288-bfee-a86318a69f79","Type":"ContainerStarted","Data":"d5f919c726281e18337ae2c2a3f06899dbf3fdbabe8d5032991c7ae54ac92dc1"} Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.090308 4799 generic.go:334] "Generic (PLEG): container finished" podID="97f78257-8fee-47e4-86dd-072411c9895d" containerID="1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6" exitCode=0 Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.090395 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"97f78257-8fee-47e4-86dd-072411c9895d","Type":"ContainerDied","Data":"1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6"} Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 
12:54:19.090434 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"97f78257-8fee-47e4-86dd-072411c9895d","Type":"ContainerDied","Data":"1d0db6bfefe858fa9fe48395ddc47ef9fb96177e187c124f6d8e1ba54c22dc68"} Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.090463 4799 scope.go:117] "RemoveContainer" containerID="1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.090637 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.097177 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.097255 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldwkm\" (UniqueName: \"kubernetes.io/projected/97f78257-8fee-47e4-86dd-072411c9895d-kube-api-access-ldwkm\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.097266 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f78257-8fee-47e4-86dd-072411c9895d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.100758 4799 generic.go:334] "Generic (PLEG): container finished" podID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerID="1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec" exitCode=0 Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.101953 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerDied","Data":"1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec"} Feb 16 12:54:19 crc 
kubenswrapper[4799]: I0216 12:54:19.148867 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.148846585 podStartE2EDuration="2.148846585s" podCreationTimestamp="2026-02-16 12:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:19.11826753 +0000 UTC m=+1364.711282874" watchObservedRunningTime="2026-02-16 12:54:19.148846585 +0000 UTC m=+1364.741861919" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.157917 4799 scope.go:117] "RemoveContainer" containerID="1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6" Feb 16 12:54:19 crc kubenswrapper[4799]: E0216 12:54:19.158459 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6\": container with ID starting with 1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6 not found: ID does not exist" containerID="1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.158596 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6"} err="failed to get container status \"1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6\": rpc error: code = NotFound desc = could not find container \"1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6\": container with ID starting with 1300df3ed32ad3a499d2b1c3c7d3a162b53dc29260a263f89a0f4d3a57ba6ce6 not found: ID does not exist" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.173217 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.191245 4799 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.202768 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:54:19 crc kubenswrapper[4799]: E0216 12:54:19.203407 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f78257-8fee-47e4-86dd-072411c9895d" containerName="nova-scheduler-scheduler" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.203428 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f78257-8fee-47e4-86dd-072411c9895d" containerName="nova-scheduler-scheduler" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.203623 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f78257-8fee-47e4-86dd-072411c9895d" containerName="nova-scheduler-scheduler" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.204411 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.206640 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.215625 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.303581 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.303941 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88p9q\" (UniqueName: 
\"kubernetes.io/projected/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-kube-api-access-88p9q\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.304007 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-config-data\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.406377 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88p9q\" (UniqueName: \"kubernetes.io/projected/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-kube-api-access-88p9q\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.406458 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-config-data\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.406505 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.411735 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-config-data\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " 
pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.411871 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.423943 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88p9q\" (UniqueName: \"kubernetes.io/projected/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-kube-api-access-88p9q\") pod \"nova-scheduler-0\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " pod="openstack/nova-scheduler-0" Feb 16 12:54:19 crc kubenswrapper[4799]: I0216 12:54:19.527555 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:54:20 crc kubenswrapper[4799]: W0216 12:54:20.062549 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf52827a2_06e0_4f60_ac3d_2efdc2b182d4.slice/crio-59fc67425abf43ae3e73d8453b71e080ec79795ccd33ccbb8f8d0a1ab4ae4b7f WatchSource:0}: Error finding container 59fc67425abf43ae3e73d8453b71e080ec79795ccd33ccbb8f8d0a1ab4ae4b7f: Status 404 returned error can't find the container with id 59fc67425abf43ae3e73d8453b71e080ec79795ccd33ccbb8f8d0a1ab4ae4b7f Feb 16 12:54:20 crc kubenswrapper[4799]: I0216 12:54:20.065400 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:54:20 crc kubenswrapper[4799]: I0216 12:54:20.114372 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f52827a2-06e0-4f60-ac3d-2efdc2b182d4","Type":"ContainerStarted","Data":"59fc67425abf43ae3e73d8453b71e080ec79795ccd33ccbb8f8d0a1ab4ae4b7f"} Feb 16 12:54:21 crc kubenswrapper[4799]: I0216 12:54:21.135547 
4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f52827a2-06e0-4f60-ac3d-2efdc2b182d4","Type":"ContainerStarted","Data":"426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2"} Feb 16 12:54:21 crc kubenswrapper[4799]: I0216 12:54:21.165398 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f78257-8fee-47e4-86dd-072411c9895d" path="/var/lib/kubelet/pods/97f78257-8fee-47e4-86dd-072411c9895d/volumes" Feb 16 12:54:21 crc kubenswrapper[4799]: I0216 12:54:21.180631 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.1806038389999998 podStartE2EDuration="2.180603839s" podCreationTimestamp="2026-02-16 12:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:21.168179414 +0000 UTC m=+1366.761194778" watchObservedRunningTime="2026-02-16 12:54:21.180603839 +0000 UTC m=+1366.773619193" Feb 16 12:54:21 crc kubenswrapper[4799]: I0216 12:54:21.588109 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 12:54:21 crc kubenswrapper[4799]: I0216 12:54:21.588184 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 12:54:24 crc kubenswrapper[4799]: I0216 12:54:24.527623 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 12:54:24 crc kubenswrapper[4799]: I0216 12:54:24.962866 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.064086 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-log-httpd\") pod \"6401f6c7-d00e-4a76-b542-4e817c8e049a\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.064699 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6401f6c7-d00e-4a76-b542-4e817c8e049a" (UID: "6401f6c7-d00e-4a76-b542-4e817c8e049a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.064890 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-config-data\") pod \"6401f6c7-d00e-4a76-b542-4e817c8e049a\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.065264 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6401f6c7-d00e-4a76-b542-4e817c8e049a" (UID: "6401f6c7-d00e-4a76-b542-4e817c8e049a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.065797 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-run-httpd\") pod \"6401f6c7-d00e-4a76-b542-4e817c8e049a\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.065842 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-combined-ca-bundle\") pod \"6401f6c7-d00e-4a76-b542-4e817c8e049a\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.066081 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-sg-core-conf-yaml\") pod \"6401f6c7-d00e-4a76-b542-4e817c8e049a\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.066147 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-scripts\") pod \"6401f6c7-d00e-4a76-b542-4e817c8e049a\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.066186 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb2tw\" (UniqueName: \"kubernetes.io/projected/6401f6c7-d00e-4a76-b542-4e817c8e049a-kube-api-access-jb2tw\") pod \"6401f6c7-d00e-4a76-b542-4e817c8e049a\" (UID: \"6401f6c7-d00e-4a76-b542-4e817c8e049a\") " Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.067201 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.067253 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6401f6c7-d00e-4a76-b542-4e817c8e049a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.080478 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6401f6c7-d00e-4a76-b542-4e817c8e049a-kube-api-access-jb2tw" (OuterVolumeSpecName: "kube-api-access-jb2tw") pod "6401f6c7-d00e-4a76-b542-4e817c8e049a" (UID: "6401f6c7-d00e-4a76-b542-4e817c8e049a"). InnerVolumeSpecName "kube-api-access-jb2tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.080493 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-scripts" (OuterVolumeSpecName: "scripts") pod "6401f6c7-d00e-4a76-b542-4e817c8e049a" (UID: "6401f6c7-d00e-4a76-b542-4e817c8e049a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.097299 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6401f6c7-d00e-4a76-b542-4e817c8e049a" (UID: "6401f6c7-d00e-4a76-b542-4e817c8e049a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.152691 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6401f6c7-d00e-4a76-b542-4e817c8e049a" (UID: "6401f6c7-d00e-4a76-b542-4e817c8e049a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.169735 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.169770 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.169785 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.169795 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb2tw\" (UniqueName: \"kubernetes.io/projected/6401f6c7-d00e-4a76-b542-4e817c8e049a-kube-api-access-jb2tw\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.175482 4799 generic.go:334] "Generic (PLEG): container finished" podID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerID="6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1" exitCode=0 Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.177988 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.179650 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-config-data" (OuterVolumeSpecName: "config-data") pod "6401f6c7-d00e-4a76-b542-4e817c8e049a" (UID: "6401f6c7-d00e-4a76-b542-4e817c8e049a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.207306 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerDied","Data":"6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1"} Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.207360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6401f6c7-d00e-4a76-b542-4e817c8e049a","Type":"ContainerDied","Data":"44cc40cb819a28e5247ba4d1500b96f9d1ea5c3533d0d6357405c9fdbc0a853d"} Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.207378 4799 scope.go:117] "RemoveContainer" containerID="b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.233971 4799 scope.go:117] "RemoveContainer" containerID="a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.256759 4799 scope.go:117] "RemoveContainer" containerID="6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.271837 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6401f6c7-d00e-4a76-b542-4e817c8e049a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.276187 4799 scope.go:117] "RemoveContainer" 
containerID="1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.297470 4799 scope.go:117] "RemoveContainer" containerID="b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022" Feb 16 12:54:25 crc kubenswrapper[4799]: E0216 12:54:25.297918 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022\": container with ID starting with b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022 not found: ID does not exist" containerID="b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.297970 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022"} err="failed to get container status \"b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022\": rpc error: code = NotFound desc = could not find container \"b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022\": container with ID starting with b31aaf8f1f8d84d2dcb8e6813f7d47ea308cd74188c27a30322bed4b50456022 not found: ID does not exist" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.298005 4799 scope.go:117] "RemoveContainer" containerID="a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e" Feb 16 12:54:25 crc kubenswrapper[4799]: E0216 12:54:25.298407 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e\": container with ID starting with a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e not found: ID does not exist" containerID="a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e" Feb 16 12:54:25 crc 
kubenswrapper[4799]: I0216 12:54:25.298427 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e"} err="failed to get container status \"a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e\": rpc error: code = NotFound desc = could not find container \"a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e\": container with ID starting with a8b4cc7df9d70d3cdf62f4f1753db5c76d29b122c5a9b4869bccebdbec23ba7e not found: ID does not exist" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.298441 4799 scope.go:117] "RemoveContainer" containerID="6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1" Feb 16 12:54:25 crc kubenswrapper[4799]: E0216 12:54:25.298679 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1\": container with ID starting with 6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1 not found: ID does not exist" containerID="6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.298698 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1"} err="failed to get container status \"6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1\": rpc error: code = NotFound desc = could not find container \"6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1\": container with ID starting with 6bd273d1047ac8e6da66443698be0f0fe5f83a9f4e037ae570cc52f9a7ff0ed1 not found: ID does not exist" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.298709 4799 scope.go:117] "RemoveContainer" containerID="1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec" Feb 16 
12:54:25 crc kubenswrapper[4799]: E0216 12:54:25.298894 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec\": container with ID starting with 1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec not found: ID does not exist" containerID="1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.298911 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec"} err="failed to get container status \"1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec\": rpc error: code = NotFound desc = could not find container \"1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec\": container with ID starting with 1928d4e230284e9266d75136a2a7706a6ba51c2b7b35c3442e9a594003575eec not found: ID does not exist" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.528708 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.554319 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.574376 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:25 crc kubenswrapper[4799]: E0216 12:54:25.575644 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="proxy-httpd" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.575676 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="proxy-httpd" Feb 16 12:54:25 crc kubenswrapper[4799]: E0216 12:54:25.575720 4799 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="ceilometer-central-agent" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.575729 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="ceilometer-central-agent" Feb 16 12:54:25 crc kubenswrapper[4799]: E0216 12:54:25.575757 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="ceilometer-notification-agent" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.575766 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="ceilometer-notification-agent" Feb 16 12:54:25 crc kubenswrapper[4799]: E0216 12:54:25.575796 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="sg-core" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.575805 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="sg-core" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.576342 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="ceilometer-notification-agent" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.576381 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="sg-core" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.576407 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="proxy-httpd" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.576438 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" containerName="ceilometer-central-agent" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.581200 4799 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.583796 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.585424 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.585793 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.607456 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.683995 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-log-httpd\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.684069 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.684104 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-run-httpd\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.684185 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-config-data\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.684222 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-scripts\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.684251 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vr8\" (UniqueName: \"kubernetes.io/projected/2736a891-3240-4fa6-beb0-24e13b7fbd8c-kube-api-access-n2vr8\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.684294 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.684318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786043 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-log-httpd\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786150 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786187 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-run-httpd\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786234 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-config-data\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786265 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-scripts\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786290 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vr8\" (UniqueName: \"kubernetes.io/projected/2736a891-3240-4fa6-beb0-24e13b7fbd8c-kube-api-access-n2vr8\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 
12:54:25.786322 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786345 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786650 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-log-httpd\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.786946 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-run-httpd\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.790295 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-scripts\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.790816 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.791353 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.793009 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-config-data\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.794284 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.804280 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vr8\" (UniqueName: \"kubernetes.io/projected/2736a891-3240-4fa6-beb0-24e13b7fbd8c-kube-api-access-n2vr8\") pod \"ceilometer-0\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " pod="openstack/ceilometer-0" Feb 16 12:54:25 crc kubenswrapper[4799]: I0216 12:54:25.956388 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:54:26 crc kubenswrapper[4799]: I0216 12:54:26.437303 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:26 crc kubenswrapper[4799]: I0216 12:54:26.448242 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 12:54:26 crc kubenswrapper[4799]: I0216 12:54:26.494182 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 12:54:26 crc kubenswrapper[4799]: I0216 12:54:26.585257 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 12:54:26 crc kubenswrapper[4799]: I0216 12:54:26.585313 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 12:54:27 crc kubenswrapper[4799]: I0216 12:54:27.163178 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6401f6c7-d00e-4a76-b542-4e817c8e049a" path="/var/lib/kubelet/pods/6401f6c7-d00e-4a76-b542-4e817c8e049a/volumes" Feb 16 12:54:27 crc kubenswrapper[4799]: I0216 12:54:27.198936 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerStarted","Data":"98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774"} Feb 16 12:54:27 crc kubenswrapper[4799]: I0216 12:54:27.198994 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerStarted","Data":"f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f"} Feb 16 12:54:27 crc kubenswrapper[4799]: I0216 12:54:27.199006 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerStarted","Data":"855979006a8eb749e47b5894a195e4cd7e83e2834b28b9f6b84c6ebd23475ba5"} Feb 16 12:54:27 crc kubenswrapper[4799]: I0216 12:54:27.513822 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 12:54:27 crc kubenswrapper[4799]: I0216 12:54:27.513897 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 12:54:27 crc kubenswrapper[4799]: I0216 12:54:27.614383 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:54:27 crc kubenswrapper[4799]: I0216 12:54:27.614395 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:54:28 crc kubenswrapper[4799]: I0216 12:54:28.212869 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerStarted","Data":"165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7"} Feb 16 12:54:28 crc kubenswrapper[4799]: I0216 12:54:28.595405 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 12:54:28 crc kubenswrapper[4799]: I0216 12:54:28.595423 4799 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 12:54:29 crc kubenswrapper[4799]: I0216 12:54:29.527851 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 12:54:29 crc kubenswrapper[4799]: I0216 12:54:29.565292 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 12:54:30 crc kubenswrapper[4799]: I0216 12:54:30.259528 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 12:54:31 crc kubenswrapper[4799]: I0216 12:54:31.243710 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerStarted","Data":"162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e"} Feb 16 12:54:31 crc kubenswrapper[4799]: I0216 12:54:31.244279 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 12:54:31 crc kubenswrapper[4799]: I0216 12:54:31.279028 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.422524974 podStartE2EDuration="6.279007558s" podCreationTimestamp="2026-02-16 12:54:25 +0000 UTC" firstStartedPulling="2026-02-16 12:54:26.443907378 +0000 UTC m=+1372.036922712" lastFinishedPulling="2026-02-16 12:54:30.300389962 +0000 UTC m=+1375.893405296" observedRunningTime="2026-02-16 12:54:31.264098392 +0000 UTC m=+1376.857113746" watchObservedRunningTime="2026-02-16 12:54:31.279007558 +0000 UTC m=+1376.872022882" Feb 16 12:54:36 crc kubenswrapper[4799]: I0216 12:54:36.589526 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Feb 16 12:54:36 crc kubenswrapper[4799]: I0216 12:54:36.592922 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 12:54:36 crc kubenswrapper[4799]: I0216 12:54:36.595071 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.281118 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.307356 4799 generic.go:334] "Generic (PLEG): container finished" podID="82882565-4fa8-4300-9cb0-e66837c374aa" containerID="550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1" exitCode=137 Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.309039 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.309605 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82882565-4fa8-4300-9cb0-e66837c374aa","Type":"ContainerDied","Data":"550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1"} Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.309641 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82882565-4fa8-4300-9cb0-e66837c374aa","Type":"ContainerDied","Data":"f542a094425b91b87767b2eb937c7805d24e89da1ceb811f7d4444b0df44d293"} Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.309659 4799 scope.go:117] "RemoveContainer" containerID="550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.317007 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 12:54:37 crc 
kubenswrapper[4799]: I0216 12:54:37.342702 4799 scope.go:117] "RemoveContainer" containerID="550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1" Feb 16 12:54:37 crc kubenswrapper[4799]: E0216 12:54:37.343498 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1\": container with ID starting with 550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1 not found: ID does not exist" containerID="550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.343559 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1"} err="failed to get container status \"550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1\": rpc error: code = NotFound desc = could not find container \"550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1\": container with ID starting with 550aa17934fbc10b3aadfe4ccd57792c922c5fee14642ec8045a0166940e6bb1 not found: ID does not exist" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.359164 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8x6f\" (UniqueName: \"kubernetes.io/projected/82882565-4fa8-4300-9cb0-e66837c374aa-kube-api-access-n8x6f\") pod \"82882565-4fa8-4300-9cb0-e66837c374aa\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.359242 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-combined-ca-bundle\") pod \"82882565-4fa8-4300-9cb0-e66837c374aa\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 
12:54:37.359579 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-config-data\") pod \"82882565-4fa8-4300-9cb0-e66837c374aa\" (UID: \"82882565-4fa8-4300-9cb0-e66837c374aa\") " Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.368139 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82882565-4fa8-4300-9cb0-e66837c374aa-kube-api-access-n8x6f" (OuterVolumeSpecName: "kube-api-access-n8x6f") pod "82882565-4fa8-4300-9cb0-e66837c374aa" (UID: "82882565-4fa8-4300-9cb0-e66837c374aa"). InnerVolumeSpecName "kube-api-access-n8x6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.401869 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82882565-4fa8-4300-9cb0-e66837c374aa" (UID: "82882565-4fa8-4300-9cb0-e66837c374aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.405426 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-config-data" (OuterVolumeSpecName: "config-data") pod "82882565-4fa8-4300-9cb0-e66837c374aa" (UID: "82882565-4fa8-4300-9cb0-e66837c374aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.462606 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.462660 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8x6f\" (UniqueName: \"kubernetes.io/projected/82882565-4fa8-4300-9cb0-e66837c374aa-kube-api-access-n8x6f\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.462677 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82882565-4fa8-4300-9cb0-e66837c374aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.519365 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.520109 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.522565 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.525660 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.646241 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.657864 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.674729 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 
12:54:37 crc kubenswrapper[4799]: E0216 12:54:37.675303 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82882565-4fa8-4300-9cb0-e66837c374aa" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.675331 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="82882565-4fa8-4300-9cb0-e66837c374aa" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.675644 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="82882565-4fa8-4300-9cb0-e66837c374aa" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.676486 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.680299 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.680521 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.680639 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.683653 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.768310 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.768449 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4gf9\" (UniqueName: \"kubernetes.io/projected/fa473e85-e345-4e62-b615-b9fc5b5ac754-kube-api-access-g4gf9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.768808 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.768902 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.769190 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.871592 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 
12:54:37.871656 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.871711 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.871766 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.871821 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4gf9\" (UniqueName: \"kubernetes.io/projected/fa473e85-e345-4e62-b615-b9fc5b5ac754-kube-api-access-g4gf9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.875757 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.876086 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.876667 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.876895 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa473e85-e345-4e62-b615-b9fc5b5ac754-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.888432 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4gf9\" (UniqueName: \"kubernetes.io/projected/fa473e85-e345-4e62-b615-b9fc5b5ac754-kube-api-access-g4gf9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa473e85-e345-4e62-b615-b9fc5b5ac754\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:37 crc kubenswrapper[4799]: I0216 12:54:37.999472 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.316584 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.323609 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.504044 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.551587 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9fb648679-bxg6f"] Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.553856 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.585566 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9fb648679-bxg6f"] Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.590317 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-svc\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.590385 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-sb\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.590500 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-nb\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.590580 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-swift-storage-0\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.590625 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6952s\" (UniqueName: \"kubernetes.io/projected/8e2b0d58-67b1-4f87-8e8f-819e56b29093-kube-api-access-6952s\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.590674 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-config\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.695818 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-svc\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.694722 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-svc\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.695933 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-sb\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.696940 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-sb\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.699153 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-nb\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.699987 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-nb\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.700536 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-swift-storage-0\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.704864 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6952s\" (UniqueName: \"kubernetes.io/projected/8e2b0d58-67b1-4f87-8e8f-819e56b29093-kube-api-access-6952s\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.705049 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-config\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.706062 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-config\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.704691 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-swift-storage-0\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.726134 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6952s\" (UniqueName: 
\"kubernetes.io/projected/8e2b0d58-67b1-4f87-8e8f-819e56b29093-kube-api-access-6952s\") pod \"dnsmasq-dns-9fb648679-bxg6f\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:38 crc kubenswrapper[4799]: I0216 12:54:38.934195 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:39 crc kubenswrapper[4799]: I0216 12:54:39.164073 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82882565-4fa8-4300-9cb0-e66837c374aa" path="/var/lib/kubelet/pods/82882565-4fa8-4300-9cb0-e66837c374aa/volumes" Feb 16 12:54:39 crc kubenswrapper[4799]: I0216 12:54:39.337142 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa473e85-e345-4e62-b615-b9fc5b5ac754","Type":"ContainerStarted","Data":"ca8877cee3c249b17550fb376f968748acafa734ff6d9bca2ace3e562e5493ab"} Feb 16 12:54:39 crc kubenswrapper[4799]: I0216 12:54:39.337232 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa473e85-e345-4e62-b615-b9fc5b5ac754","Type":"ContainerStarted","Data":"fcdf548c7337d2b1d805e81772744b96bd83fcf9c5fc9fbd79fcc907a7c0bbc6"} Feb 16 12:54:39 crc kubenswrapper[4799]: I0216 12:54:39.360301 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.360275302 podStartE2EDuration="2.360275302s" podCreationTimestamp="2026-02-16 12:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:39.357926554 +0000 UTC m=+1384.950941888" watchObservedRunningTime="2026-02-16 12:54:39.360275302 +0000 UTC m=+1384.953290636" Feb 16 12:54:39 crc kubenswrapper[4799]: I0216 12:54:39.520418 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9fb648679-bxg6f"] 
Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.351933 4799 generic.go:334] "Generic (PLEG): container finished" podID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" containerID="6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f" exitCode=0 Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.352057 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" event={"ID":"8e2b0d58-67b1-4f87-8e8f-819e56b29093","Type":"ContainerDied","Data":"6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f"} Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.352411 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" event={"ID":"8e2b0d58-67b1-4f87-8e8f-819e56b29093","Type":"ContainerStarted","Data":"882bef35ec5ac7f1ce89972717f53222d4a08ab94972b296719aeed657aa86b8"} Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.927330 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.928007 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="ceilometer-central-agent" containerID="cri-o://f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f" gracePeriod=30 Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.928241 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="sg-core" containerID="cri-o://165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7" gracePeriod=30 Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.928262 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="ceilometer-notification-agent" 
containerID="cri-o://98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774" gracePeriod=30 Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.928655 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="proxy-httpd" containerID="cri-o://162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e" gracePeriod=30 Feb 16 12:54:40 crc kubenswrapper[4799]: I0216 12:54:40.936604 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.227:3000/\": read tcp 10.217.0.2:49672->10.217.0.227:3000: read: connection reset by peer" Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.226941 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.370874 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" event={"ID":"8e2b0d58-67b1-4f87-8e8f-819e56b29093","Type":"ContainerStarted","Data":"deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83"} Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.370950 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.383951 4799 generic.go:334] "Generic (PLEG): container finished" podID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerID="162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e" exitCode=0 Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.383989 4799 generic.go:334] "Generic (PLEG): container finished" podID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerID="165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7" exitCode=2 Feb 16 12:54:41 crc 
kubenswrapper[4799]: I0216 12:54:41.384002 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerDied","Data":"162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e"} Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.384068 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerDied","Data":"165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7"} Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.384208 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-log" containerID="cri-o://61af341e07b37d2c6cdba6f679daf0102526c4f9c1ca72250052fda0cccaddca" gracePeriod=30 Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.384263 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-api" containerID="cri-o://f02fb478a9ab493e71d605c0f0636011e2a65981a64794466b45f9981d40781d" gracePeriod=30 Feb 16 12:54:41 crc kubenswrapper[4799]: I0216 12:54:41.407381 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" podStartSLOduration=3.407356474 podStartE2EDuration="3.407356474s" podCreationTimestamp="2026-02-16 12:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:41.391196512 +0000 UTC m=+1386.984211856" watchObservedRunningTime="2026-02-16 12:54:41.407356474 +0000 UTC m=+1387.000371808" Feb 16 12:54:42 crc kubenswrapper[4799]: I0216 12:54:42.399883 4799 generic.go:334] "Generic (PLEG): container finished" podID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" 
containerID="f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f" exitCode=0 Feb 16 12:54:42 crc kubenswrapper[4799]: I0216 12:54:42.399976 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerDied","Data":"f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f"} Feb 16 12:54:42 crc kubenswrapper[4799]: I0216 12:54:42.404716 4799 generic.go:334] "Generic (PLEG): container finished" podID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerID="61af341e07b37d2c6cdba6f679daf0102526c4f9c1ca72250052fda0cccaddca" exitCode=143 Feb 16 12:54:42 crc kubenswrapper[4799]: I0216 12:54:42.405511 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7f2eb3b-99ec-4288-bfee-a86318a69f79","Type":"ContainerDied","Data":"61af341e07b37d2c6cdba6f679daf0102526c4f9c1ca72250052fda0cccaddca"} Feb 16 12:54:42 crc kubenswrapper[4799]: I0216 12:54:42.999854 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.419292 4799 generic.go:334] "Generic (PLEG): container finished" podID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerID="f02fb478a9ab493e71d605c0f0636011e2a65981a64794466b45f9981d40781d" exitCode=0 Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.419361 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7f2eb3b-99ec-4288-bfee-a86318a69f79","Type":"ContainerDied","Data":"f02fb478a9ab493e71d605c0f0636011e2a65981a64794466b45f9981d40781d"} Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.527316 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.642398 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzxcz\" (UniqueName: \"kubernetes.io/projected/b7f2eb3b-99ec-4288-bfee-a86318a69f79-kube-api-access-wzxcz\") pod \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.642571 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-config-data\") pod \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.642678 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-combined-ca-bundle\") pod \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.642742 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f2eb3b-99ec-4288-bfee-a86318a69f79-logs\") pod \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\" (UID: \"b7f2eb3b-99ec-4288-bfee-a86318a69f79\") " Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.644104 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7f2eb3b-99ec-4288-bfee-a86318a69f79-logs" (OuterVolumeSpecName: "logs") pod "b7f2eb3b-99ec-4288-bfee-a86318a69f79" (UID: "b7f2eb3b-99ec-4288-bfee-a86318a69f79"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.650623 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7f2eb3b-99ec-4288-bfee-a86318a69f79-kube-api-access-wzxcz" (OuterVolumeSpecName: "kube-api-access-wzxcz") pod "b7f2eb3b-99ec-4288-bfee-a86318a69f79" (UID: "b7f2eb3b-99ec-4288-bfee-a86318a69f79"). InnerVolumeSpecName "kube-api-access-wzxcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.684558 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7f2eb3b-99ec-4288-bfee-a86318a69f79" (UID: "b7f2eb3b-99ec-4288-bfee-a86318a69f79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.695191 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-config-data" (OuterVolumeSpecName: "config-data") pod "b7f2eb3b-99ec-4288-bfee-a86318a69f79" (UID: "b7f2eb3b-99ec-4288-bfee-a86318a69f79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.746627 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzxcz\" (UniqueName: \"kubernetes.io/projected/b7f2eb3b-99ec-4288-bfee-a86318a69f79-kube-api-access-wzxcz\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.746674 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.746686 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f2eb3b-99ec-4288-bfee-a86318a69f79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:43 crc kubenswrapper[4799]: I0216 12:54:43.746699 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f2eb3b-99ec-4288-bfee-a86318a69f79-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.431450 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7f2eb3b-99ec-4288-bfee-a86318a69f79","Type":"ContainerDied","Data":"d5f919c726281e18337ae2c2a3f06899dbf3fdbabe8d5032991c7ae54ac92dc1"} Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.431509 4799 scope.go:117] "RemoveContainer" containerID="f02fb478a9ab493e71d605c0f0636011e2a65981a64794466b45f9981d40781d" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.431527 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.506634 4799 scope.go:117] "RemoveContainer" containerID="61af341e07b37d2c6cdba6f679daf0102526c4f9c1ca72250052fda0cccaddca" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.510232 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.519272 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.546489 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:44 crc kubenswrapper[4799]: E0216 12:54:44.547069 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-log" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.547088 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-log" Feb 16 12:54:44 crc kubenswrapper[4799]: E0216 12:54:44.547149 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-api" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.547157 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-api" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.547376 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-log" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.547406 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" containerName="nova-api-api" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.548754 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.551743 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.551926 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.552553 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.557439 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.665034 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.665446 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7dbw\" (UniqueName: \"kubernetes.io/projected/9d723753-db9a-4e04-ab35-5949e0af15fa-kube-api-access-h7dbw\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.665552 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-public-tls-certs\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.665599 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d723753-db9a-4e04-ab35-5949e0af15fa-logs\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.665664 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.665710 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-config-data\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.768267 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-public-tls-certs\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.768353 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d723753-db9a-4e04-ab35-5949e0af15fa-logs\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.768434 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc 
kubenswrapper[4799]: I0216 12:54:44.769237 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d723753-db9a-4e04-ab35-5949e0af15fa-logs\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.769519 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-config-data\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.769787 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.771969 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7dbw\" (UniqueName: \"kubernetes.io/projected/9d723753-db9a-4e04-ab35-5949e0af15fa-kube-api-access-h7dbw\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.774954 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.779333 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.781630 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.788112 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-config-data\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.792610 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7dbw\" (UniqueName: \"kubernetes.io/projected/9d723753-db9a-4e04-ab35-5949e0af15fa-kube-api-access-h7dbw\") pod \"nova-api-0\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " pod="openstack/nova-api-0" Feb 16 12:54:44 crc kubenswrapper[4799]: I0216 12:54:44.877095 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.015627 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.078391 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2vr8\" (UniqueName: \"kubernetes.io/projected/2736a891-3240-4fa6-beb0-24e13b7fbd8c-kube-api-access-n2vr8\") pod \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.078699 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-combined-ca-bundle\") pod \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.078748 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-run-httpd\") pod \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.078805 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-config-data\") pod \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.078849 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-sg-core-conf-yaml\") pod \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.078889 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-ceilometer-tls-certs\") pod \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.078998 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-log-httpd\") pod \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.079068 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-scripts\") pod \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\" (UID: \"2736a891-3240-4fa6-beb0-24e13b7fbd8c\") " Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.079222 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2736a891-3240-4fa6-beb0-24e13b7fbd8c" (UID: "2736a891-3240-4fa6-beb0-24e13b7fbd8c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.079899 4799 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.079933 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2736a891-3240-4fa6-beb0-24e13b7fbd8c" (UID: "2736a891-3240-4fa6-beb0-24e13b7fbd8c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.084253 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-scripts" (OuterVolumeSpecName: "scripts") pod "2736a891-3240-4fa6-beb0-24e13b7fbd8c" (UID: "2736a891-3240-4fa6-beb0-24e13b7fbd8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.088974 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2736a891-3240-4fa6-beb0-24e13b7fbd8c-kube-api-access-n2vr8" (OuterVolumeSpecName: "kube-api-access-n2vr8") pod "2736a891-3240-4fa6-beb0-24e13b7fbd8c" (UID: "2736a891-3240-4fa6-beb0-24e13b7fbd8c"). InnerVolumeSpecName "kube-api-access-n2vr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.122667 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2736a891-3240-4fa6-beb0-24e13b7fbd8c" (UID: "2736a891-3240-4fa6-beb0-24e13b7fbd8c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.151772 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2736a891-3240-4fa6-beb0-24e13b7fbd8c" (UID: "2736a891-3240-4fa6-beb0-24e13b7fbd8c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.166776 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7f2eb3b-99ec-4288-bfee-a86318a69f79" path="/var/lib/kubelet/pods/b7f2eb3b-99ec-4288-bfee-a86318a69f79/volumes" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.182161 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2vr8\" (UniqueName: \"kubernetes.io/projected/2736a891-3240-4fa6-beb0-24e13b7fbd8c-kube-api-access-n2vr8\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.182214 4799 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.182226 4799 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.182238 4799 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2736a891-3240-4fa6-beb0-24e13b7fbd8c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.182251 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.208565 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2736a891-3240-4fa6-beb0-24e13b7fbd8c" (UID: "2736a891-3240-4fa6-beb0-24e13b7fbd8c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.212260 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-config-data" (OuterVolumeSpecName: "config-data") pod "2736a891-3240-4fa6-beb0-24e13b7fbd8c" (UID: "2736a891-3240-4fa6-beb0-24e13b7fbd8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.284763 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.284815 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2736a891-3240-4fa6-beb0-24e13b7fbd8c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.350467 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.448323 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d723753-db9a-4e04-ab35-5949e0af15fa","Type":"ContainerStarted","Data":"84efef3135f5edbb8897d0d7d5c083b7e68c515f01ed090737ca9c6cc9954cf3"} Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.451833 4799 generic.go:334] "Generic (PLEG): container finished" podID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerID="98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774" exitCode=0 Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.452052 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.452047 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerDied","Data":"98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774"} Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.452116 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2736a891-3240-4fa6-beb0-24e13b7fbd8c","Type":"ContainerDied","Data":"855979006a8eb749e47b5894a195e4cd7e83e2834b28b9f6b84c6ebd23475ba5"} Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.452166 4799 scope.go:117] "RemoveContainer" containerID="162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.540817 4799 scope.go:117] "RemoveContainer" containerID="165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.564496 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.582219 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.585060 4799 scope.go:117] "RemoveContainer" containerID="98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.593873 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:45 crc kubenswrapper[4799]: E0216 12:54:45.594404 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="ceilometer-notification-agent" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.594426 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="ceilometer-notification-agent" Feb 16 12:54:45 crc kubenswrapper[4799]: E0216 12:54:45.594441 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="proxy-httpd" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.594447 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="proxy-httpd" Feb 16 12:54:45 crc kubenswrapper[4799]: E0216 12:54:45.594460 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="ceilometer-central-agent" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.594466 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="ceilometer-central-agent" Feb 16 12:54:45 crc kubenswrapper[4799]: E0216 12:54:45.594479 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="sg-core" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.594486 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="sg-core" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.594670 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="sg-core" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.594688 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="proxy-httpd" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.594704 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="ceilometer-notification-agent" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.594713 4799 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" containerName="ceilometer-central-agent" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.596534 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.599944 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.600148 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.600235 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.604081 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.630963 4799 scope.go:117] "RemoveContainer" containerID="f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.661518 4799 scope.go:117] "RemoveContainer" containerID="162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e" Feb 16 12:54:45 crc kubenswrapper[4799]: E0216 12:54:45.662183 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e\": container with ID starting with 162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e not found: ID does not exist" containerID="162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.662215 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e"} err="failed to get 
container status \"162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e\": rpc error: code = NotFound desc = could not find container \"162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e\": container with ID starting with 162231a3592a6fac46b6007e40d9504d3742412a1aab3ce5bd2f03cbd529292e not found: ID does not exist" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.662240 4799 scope.go:117] "RemoveContainer" containerID="165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7" Feb 16 12:54:45 crc kubenswrapper[4799]: E0216 12:54:45.662709 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7\": container with ID starting with 165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7 not found: ID does not exist" containerID="165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.662914 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7"} err="failed to get container status \"165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7\": rpc error: code = NotFound desc = could not find container \"165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7\": container with ID starting with 165f4965e77dade92827e360783794858392e0648304dde148218883b82fdbd7 not found: ID does not exist" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.662971 4799 scope.go:117] "RemoveContainer" containerID="98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774" Feb 16 12:54:45 crc kubenswrapper[4799]: E0216 12:54:45.663569 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774\": container with ID starting with 98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774 not found: ID does not exist" containerID="98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.663611 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774"} err="failed to get container status \"98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774\": rpc error: code = NotFound desc = could not find container \"98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774\": container with ID starting with 98b8de5c21715a03df395939e46ba7e2214133b4e73ff352679e0c2c2b9e9774 not found: ID does not exist" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.664508 4799 scope.go:117] "RemoveContainer" containerID="f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f" Feb 16 12:54:45 crc kubenswrapper[4799]: E0216 12:54:45.665334 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f\": container with ID starting with f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f not found: ID does not exist" containerID="f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.665383 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f"} err="failed to get container status \"f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f\": rpc error: code = NotFound desc = could not find container \"f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f\": container with ID 
starting with f3326d81ec78f08a51a676015ae23564673ae031423e1b5d97aede274c41127f not found: ID does not exist" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.698443 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.698534 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnps\" (UniqueName: \"kubernetes.io/projected/13a099ed-6620-4310-85c7-986b1a366a1b-kube-api-access-6nnps\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.698562 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13a099ed-6620-4310-85c7-986b1a366a1b-run-httpd\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.698582 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.698668 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13a099ed-6620-4310-85c7-986b1a366a1b-log-httpd\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" 
Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.698701 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-config-data\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.698725 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-scripts\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.698750 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.800991 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnps\" (UniqueName: \"kubernetes.io/projected/13a099ed-6620-4310-85c7-986b1a366a1b-kube-api-access-6nnps\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.801400 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13a099ed-6620-4310-85c7-986b1a366a1b-run-httpd\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.801559 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.801740 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13a099ed-6620-4310-85c7-986b1a366a1b-log-httpd\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.801907 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-config-data\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.802044 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-scripts\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.802175 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.802341 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: 
I0216 12:54:45.804103 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13a099ed-6620-4310-85c7-986b1a366a1b-run-httpd\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.804380 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13a099ed-6620-4310-85c7-986b1a366a1b-log-httpd\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.808331 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-scripts\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.808746 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.809214 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.810927 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.811727 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a099ed-6620-4310-85c7-986b1a366a1b-config-data\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.827273 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnps\" (UniqueName: \"kubernetes.io/projected/13a099ed-6620-4310-85c7-986b1a366a1b-kube-api-access-6nnps\") pod \"ceilometer-0\" (UID: \"13a099ed-6620-4310-85c7-986b1a366a1b\") " pod="openstack/ceilometer-0" Feb 16 12:54:45 crc kubenswrapper[4799]: I0216 12:54:45.924268 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 12:54:46 crc kubenswrapper[4799]: I0216 12:54:46.423253 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 12:54:46 crc kubenswrapper[4799]: W0216 12:54:46.424571 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13a099ed_6620_4310_85c7_986b1a366a1b.slice/crio-2d2442b739544bea3e9f08c78e492a105bb5e3f79efbb91d23767275ef633d70 WatchSource:0}: Error finding container 2d2442b739544bea3e9f08c78e492a105bb5e3f79efbb91d23767275ef633d70: Status 404 returned error can't find the container with id 2d2442b739544bea3e9f08c78e492a105bb5e3f79efbb91d23767275ef633d70 Feb 16 12:54:46 crc kubenswrapper[4799]: I0216 12:54:46.466942 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d723753-db9a-4e04-ab35-5949e0af15fa","Type":"ContainerStarted","Data":"0ad3295a07b4a07cfe006a25ca47edae4a8af8692004d88f32135df24c3fad0c"} Feb 16 12:54:46 crc kubenswrapper[4799]: I0216 12:54:46.467298 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d723753-db9a-4e04-ab35-5949e0af15fa","Type":"ContainerStarted","Data":"65e228536f76b66350fefe1ff284534af82751994e950d51777e241018e1dbf3"} Feb 16 12:54:46 crc kubenswrapper[4799]: I0216 12:54:46.469026 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13a099ed-6620-4310-85c7-986b1a366a1b","Type":"ContainerStarted","Data":"2d2442b739544bea3e9f08c78e492a105bb5e3f79efbb91d23767275ef633d70"} Feb 16 12:54:46 crc kubenswrapper[4799]: I0216 12:54:46.494379 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.494356739 podStartE2EDuration="2.494356739s" podCreationTimestamp="2026-02-16 12:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:46.485137625 +0000 UTC m=+1392.078152979" watchObservedRunningTime="2026-02-16 12:54:46.494356739 +0000 UTC m=+1392.087372073" Feb 16 12:54:47 crc kubenswrapper[4799]: I0216 12:54:47.161553 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2736a891-3240-4fa6-beb0-24e13b7fbd8c" path="/var/lib/kubelet/pods/2736a891-3240-4fa6-beb0-24e13b7fbd8c/volumes" Feb 16 12:54:47 crc kubenswrapper[4799]: I0216 12:54:47.483463 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13a099ed-6620-4310-85c7-986b1a366a1b","Type":"ContainerStarted","Data":"b4734fbfd2a1b9738ae4b62439836a925745cbaa1aeeb600f7dc475ee4e921a5"} Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.000654 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.060911 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 
12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.495583 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13a099ed-6620-4310-85c7-986b1a366a1b","Type":"ContainerStarted","Data":"0ee9e3d0ec3adc7c6b47226dcacda39fe74a33902ebd1509266e4ebf35c0be43"} Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.514420 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.782966 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-69hl6"] Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.784595 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.787997 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.788173 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.797815 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-69hl6"] Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.881856 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.881918 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-scripts\") 
pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.882568 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfvfj\" (UniqueName: \"kubernetes.io/projected/e6474380-de01-4e68-bcea-caf2ce9bb2aa-kube-api-access-cfvfj\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.882634 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-config-data\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.936300 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.984919 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfvfj\" (UniqueName: \"kubernetes.io/projected/e6474380-de01-4e68-bcea-caf2ce9bb2aa-kube-api-access-cfvfj\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.985028 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-config-data\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.985238 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:48 crc kubenswrapper[4799]: I0216 12:54:48.985267 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-scripts\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:48.994278 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-config-data\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:48.995882 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-scripts\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.010708 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.021215 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfvfj\" 
(UniqueName: \"kubernetes.io/projected/e6474380-de01-4e68-bcea-caf2ce9bb2aa-kube-api-access-cfvfj\") pod \"nova-cell1-cell-mapping-69hl6\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.028289 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb95f7db7-lrdp9"] Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.028595 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" podUID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" containerName="dnsmasq-dns" containerID="cri-o://d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab" gracePeriod=10 Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.153883 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.483657 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.515355 4799 generic.go:334] "Generic (PLEG): container finished" podID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" containerID="d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab" exitCode=0 Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.515444 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" event={"ID":"edbeb15e-e56a-4311-82a6-71f46a0b81d8","Type":"ContainerDied","Data":"d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab"} Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.515482 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" event={"ID":"edbeb15e-e56a-4311-82a6-71f46a0b81d8","Type":"ContainerDied","Data":"6992f80d9e0b12225314c9c6811903a3e76816e775815ac3e5baf21b6efb8f22"} Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.515507 4799 scope.go:117] "RemoveContainer" containerID="d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.515574 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb95f7db7-lrdp9" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.525257 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13a099ed-6620-4310-85c7-986b1a366a1b","Type":"ContainerStarted","Data":"df4dda4b3168a2f356689e6199c58f5575cd9909061e0e77161a653a41c8ff36"} Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.556306 4799 scope.go:117] "RemoveContainer" containerID="114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.590623 4799 scope.go:117] "RemoveContainer" containerID="d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab" Feb 16 12:54:49 crc kubenswrapper[4799]: E0216 12:54:49.591413 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab\": container with ID starting with d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab not found: ID does not exist" containerID="d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.591449 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab"} err="failed to get container status \"d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab\": rpc error: code = NotFound desc = could not find container \"d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab\": container with ID starting with d1f4a40162fb234eafaee9f6da9f43bc3c1e60cd25c4a050e96e3de94f2b9fab not found: ID does not exist" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.591476 4799 scope.go:117] "RemoveContainer" containerID="114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3" Feb 16 12:54:49 crc 
kubenswrapper[4799]: E0216 12:54:49.591746 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3\": container with ID starting with 114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3 not found: ID does not exist" containerID="114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.591767 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3"} err="failed to get container status \"114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3\": rpc error: code = NotFound desc = could not find container \"114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3\": container with ID starting with 114ee46e702c5d7fd67520cdefc1236032397839ce153ff9771f3459abf4fea3 not found: ID does not exist" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.606649 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-config\") pod \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.606756 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-sb\") pod \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.606844 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-svc\") pod 
\"edbeb15e-e56a-4311-82a6-71f46a0b81d8\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.606896 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-swift-storage-0\") pod \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.606961 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgk42\" (UniqueName: \"kubernetes.io/projected/edbeb15e-e56a-4311-82a6-71f46a0b81d8-kube-api-access-kgk42\") pod \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.606990 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-nb\") pod \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\" (UID: \"edbeb15e-e56a-4311-82a6-71f46a0b81d8\") " Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.613482 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edbeb15e-e56a-4311-82a6-71f46a0b81d8-kube-api-access-kgk42" (OuterVolumeSpecName: "kube-api-access-kgk42") pod "edbeb15e-e56a-4311-82a6-71f46a0b81d8" (UID: "edbeb15e-e56a-4311-82a6-71f46a0b81d8"). InnerVolumeSpecName "kube-api-access-kgk42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.681535 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "edbeb15e-e56a-4311-82a6-71f46a0b81d8" (UID: "edbeb15e-e56a-4311-82a6-71f46a0b81d8"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.688508 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "edbeb15e-e56a-4311-82a6-71f46a0b81d8" (UID: "edbeb15e-e56a-4311-82a6-71f46a0b81d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.690276 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "edbeb15e-e56a-4311-82a6-71f46a0b81d8" (UID: "edbeb15e-e56a-4311-82a6-71f46a0b81d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.698347 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-config" (OuterVolumeSpecName: "config") pod "edbeb15e-e56a-4311-82a6-71f46a0b81d8" (UID: "edbeb15e-e56a-4311-82a6-71f46a0b81d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.704026 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "edbeb15e-e56a-4311-82a6-71f46a0b81d8" (UID: "edbeb15e-e56a-4311-82a6-71f46a0b81d8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.710189 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.710235 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.710249 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgk42\" (UniqueName: \"kubernetes.io/projected/edbeb15e-e56a-4311-82a6-71f46a0b81d8-kube-api-access-kgk42\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.710260 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.710271 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.710281 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edbeb15e-e56a-4311-82a6-71f46a0b81d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:54:49 crc kubenswrapper[4799]: I0216 12:54:49.771983 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-69hl6"] Feb 16 12:54:49 crc kubenswrapper[4799]: W0216 12:54:49.773090 4799 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6474380_de01_4e68_bcea_caf2ce9bb2aa.slice/crio-8ab3affce6b5f63459d85f5086cd1d001c5f39d9f4943168307420aa37fcbdf2 WatchSource:0}: Error finding container 8ab3affce6b5f63459d85f5086cd1d001c5f39d9f4943168307420aa37fcbdf2: Status 404 returned error can't find the container with id 8ab3affce6b5f63459d85f5086cd1d001c5f39d9f4943168307420aa37fcbdf2 Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.045575 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb95f7db7-lrdp9"] Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.057076 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb95f7db7-lrdp9"] Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.536712 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-69hl6" event={"ID":"e6474380-de01-4e68-bcea-caf2ce9bb2aa","Type":"ContainerStarted","Data":"112cb8e3158f1bb81150d382e2563fda0af1811256c6fc0a501f0624d1bb6885"} Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.537063 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-69hl6" event={"ID":"e6474380-de01-4e68-bcea-caf2ce9bb2aa","Type":"ContainerStarted","Data":"8ab3affce6b5f63459d85f5086cd1d001c5f39d9f4943168307420aa37fcbdf2"} Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.542847 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13a099ed-6620-4310-85c7-986b1a366a1b","Type":"ContainerStarted","Data":"ed2fb23c031fcf9b35e5e2f8b232fdd51234a629e4f66b7535e97f9693090198"} Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.544025 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.558212 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-cell-mapping-69hl6" podStartSLOduration=2.558189295 podStartE2EDuration="2.558189295s" podCreationTimestamp="2026-02-16 12:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:50.551544645 +0000 UTC m=+1396.144559979" watchObservedRunningTime="2026-02-16 12:54:50.558189295 +0000 UTC m=+1396.151204629" Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.580035 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.139530835 podStartE2EDuration="5.580010949s" podCreationTimestamp="2026-02-16 12:54:45 +0000 UTC" firstStartedPulling="2026-02-16 12:54:46.427716353 +0000 UTC m=+1392.020731687" lastFinishedPulling="2026-02-16 12:54:49.868196467 +0000 UTC m=+1395.461211801" observedRunningTime="2026-02-16 12:54:50.57828949 +0000 UTC m=+1396.171304824" watchObservedRunningTime="2026-02-16 12:54:50.580010949 +0000 UTC m=+1396.173026293" Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.902368 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qsv5h"] Feb 16 12:54:50 crc kubenswrapper[4799]: E0216 12:54:50.903444 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" containerName="init" Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.903467 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" containerName="init" Feb 16 12:54:50 crc kubenswrapper[4799]: E0216 12:54:50.903482 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" containerName="dnsmasq-dns" Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.903491 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" containerName="dnsmasq-dns" Feb 16 12:54:50 crc 
kubenswrapper[4799]: I0216 12:54:50.903734 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" containerName="dnsmasq-dns" Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.905413 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:50 crc kubenswrapper[4799]: I0216 12:54:50.914839 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsv5h"] Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.042270 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cf8cac2-5686-40a2-91ee-86b8dc75db37-utilities\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.042540 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcczs\" (UniqueName: \"kubernetes.io/projected/7cf8cac2-5686-40a2-91ee-86b8dc75db37-kube-api-access-bcczs\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.042580 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cf8cac2-5686-40a2-91ee-86b8dc75db37-catalog-content\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.145100 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7cf8cac2-5686-40a2-91ee-86b8dc75db37-utilities\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.145308 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcczs\" (UniqueName: \"kubernetes.io/projected/7cf8cac2-5686-40a2-91ee-86b8dc75db37-kube-api-access-bcczs\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.145335 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cf8cac2-5686-40a2-91ee-86b8dc75db37-catalog-content\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.145678 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cf8cac2-5686-40a2-91ee-86b8dc75db37-utilities\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.145977 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cf8cac2-5686-40a2-91ee-86b8dc75db37-catalog-content\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.161146 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edbeb15e-e56a-4311-82a6-71f46a0b81d8" 
path="/var/lib/kubelet/pods/edbeb15e-e56a-4311-82a6-71f46a0b81d8/volumes" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.176502 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcczs\" (UniqueName: \"kubernetes.io/projected/7cf8cac2-5686-40a2-91ee-86b8dc75db37-kube-api-access-bcczs\") pod \"certified-operators-qsv5h\" (UID: \"7cf8cac2-5686-40a2-91ee-86b8dc75db37\") " pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.242056 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.519643 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m952t"] Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.544678 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.618266 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m952t"] Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.663053 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6gff\" (UniqueName: \"kubernetes.io/projected/dd9766ad-b126-4eff-bd30-0ffedfcff830-kube-api-access-z6gff\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.664440 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-utilities\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " 
pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.667299 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-catalog-content\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.769210 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-catalog-content\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.769267 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6gff\" (UniqueName: \"kubernetes.io/projected/dd9766ad-b126-4eff-bd30-0ffedfcff830-kube-api-access-z6gff\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.769326 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-utilities\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.769865 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-utilities\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " 
pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.770088 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-catalog-content\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.791327 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6gff\" (UniqueName: \"kubernetes.io/projected/dd9766ad-b126-4eff-bd30-0ffedfcff830-kube-api-access-z6gff\") pod \"redhat-operators-m952t\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.822492 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsv5h"] Feb 16 12:54:51 crc kubenswrapper[4799]: I0216 12:54:51.948286 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:54:52 crc kubenswrapper[4799]: I0216 12:54:52.483097 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m952t"] Feb 16 12:54:52 crc kubenswrapper[4799]: I0216 12:54:52.625537 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m952t" event={"ID":"dd9766ad-b126-4eff-bd30-0ffedfcff830","Type":"ContainerStarted","Data":"5d0ff0484805be010d5685a2068521138719b8c02cc53f24e9dbdd7649473e98"} Feb 16 12:54:52 crc kubenswrapper[4799]: I0216 12:54:52.630897 4799 generic.go:334] "Generic (PLEG): container finished" podID="7cf8cac2-5686-40a2-91ee-86b8dc75db37" containerID="25d8ec8cad73be1cf48c17b9b14ce4b29d2d830d04cd7205a7ed8ce3c53abb5b" exitCode=0 Feb 16 12:54:52 crc kubenswrapper[4799]: I0216 12:54:52.632188 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsv5h" event={"ID":"7cf8cac2-5686-40a2-91ee-86b8dc75db37","Type":"ContainerDied","Data":"25d8ec8cad73be1cf48c17b9b14ce4b29d2d830d04cd7205a7ed8ce3c53abb5b"} Feb 16 12:54:52 crc kubenswrapper[4799]: I0216 12:54:52.632255 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsv5h" event={"ID":"7cf8cac2-5686-40a2-91ee-86b8dc75db37","Type":"ContainerStarted","Data":"62c59f0d7ce70d2acb4e8f174928957fc872678ac02d217211b303e844dc40ea"} Feb 16 12:54:53 crc kubenswrapper[4799]: I0216 12:54:53.643893 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m952t" event={"ID":"dd9766ad-b126-4eff-bd30-0ffedfcff830","Type":"ContainerDied","Data":"1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689"} Feb 16 12:54:53 crc kubenswrapper[4799]: I0216 12:54:53.643759 4799 generic.go:334] "Generic (PLEG): container finished" podID="dd9766ad-b126-4eff-bd30-0ffedfcff830" 
containerID="1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689" exitCode=0 Feb 16 12:54:54 crc kubenswrapper[4799]: I0216 12:54:54.659038 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m952t" event={"ID":"dd9766ad-b126-4eff-bd30-0ffedfcff830","Type":"ContainerStarted","Data":"b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c"} Feb 16 12:54:54 crc kubenswrapper[4799]: I0216 12:54:54.880417 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 12:54:54 crc kubenswrapper[4799]: I0216 12:54:54.880880 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 12:54:55 crc kubenswrapper[4799]: I0216 12:54:55.920440 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:54:55 crc kubenswrapper[4799]: I0216 12:54:55.920512 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:54:56 crc kubenswrapper[4799]: I0216 12:54:56.688643 4799 generic.go:334] "Generic (PLEG): container finished" podID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerID="b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c" exitCode=0 Feb 16 12:54:56 crc kubenswrapper[4799]: I0216 12:54:56.688719 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m952t" 
event={"ID":"dd9766ad-b126-4eff-bd30-0ffedfcff830","Type":"ContainerDied","Data":"b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c"} Feb 16 12:54:58 crc kubenswrapper[4799]: I0216 12:54:58.714443 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsv5h" event={"ID":"7cf8cac2-5686-40a2-91ee-86b8dc75db37","Type":"ContainerStarted","Data":"d9547e76002246da5503577228485364d4a466d6e1b37636ff958b4a0099e379"} Feb 16 12:54:58 crc kubenswrapper[4799]: I0216 12:54:58.716034 4799 generic.go:334] "Generic (PLEG): container finished" podID="e6474380-de01-4e68-bcea-caf2ce9bb2aa" containerID="112cb8e3158f1bb81150d382e2563fda0af1811256c6fc0a501f0624d1bb6885" exitCode=0 Feb 16 12:54:58 crc kubenswrapper[4799]: I0216 12:54:58.716099 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-69hl6" event={"ID":"e6474380-de01-4e68-bcea-caf2ce9bb2aa","Type":"ContainerDied","Data":"112cb8e3158f1bb81150d382e2563fda0af1811256c6fc0a501f0624d1bb6885"} Feb 16 12:54:59 crc kubenswrapper[4799]: I0216 12:54:59.729513 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m952t" event={"ID":"dd9766ad-b126-4eff-bd30-0ffedfcff830","Type":"ContainerStarted","Data":"43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2"} Feb 16 12:54:59 crc kubenswrapper[4799]: I0216 12:54:59.732351 4799 generic.go:334] "Generic (PLEG): container finished" podID="7cf8cac2-5686-40a2-91ee-86b8dc75db37" containerID="d9547e76002246da5503577228485364d4a466d6e1b37636ff958b4a0099e379" exitCode=0 Feb 16 12:54:59 crc kubenswrapper[4799]: I0216 12:54:59.732394 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsv5h" event={"ID":"7cf8cac2-5686-40a2-91ee-86b8dc75db37","Type":"ContainerDied","Data":"d9547e76002246da5503577228485364d4a466d6e1b37636ff958b4a0099e379"} Feb 16 12:54:59 crc kubenswrapper[4799]: I0216 
12:54:59.761095 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m952t" podStartSLOduration=3.910188924 podStartE2EDuration="8.761066806s" podCreationTimestamp="2026-02-16 12:54:51 +0000 UTC" firstStartedPulling="2026-02-16 12:54:53.660162694 +0000 UTC m=+1399.253178038" lastFinishedPulling="2026-02-16 12:54:58.511040586 +0000 UTC m=+1404.104055920" observedRunningTime="2026-02-16 12:54:59.74685338 +0000 UTC m=+1405.339868714" watchObservedRunningTime="2026-02-16 12:54:59.761066806 +0000 UTC m=+1405.354082140" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.128099 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.298762 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-config-data\") pod \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.299082 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-scripts\") pod \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.299106 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-combined-ca-bundle\") pod \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.299157 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfvfj\" 
(UniqueName: \"kubernetes.io/projected/e6474380-de01-4e68-bcea-caf2ce9bb2aa-kube-api-access-cfvfj\") pod \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\" (UID: \"e6474380-de01-4e68-bcea-caf2ce9bb2aa\") " Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.304659 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6474380-de01-4e68-bcea-caf2ce9bb2aa-kube-api-access-cfvfj" (OuterVolumeSpecName: "kube-api-access-cfvfj") pod "e6474380-de01-4e68-bcea-caf2ce9bb2aa" (UID: "e6474380-de01-4e68-bcea-caf2ce9bb2aa"). InnerVolumeSpecName "kube-api-access-cfvfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.304868 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-scripts" (OuterVolumeSpecName: "scripts") pod "e6474380-de01-4e68-bcea-caf2ce9bb2aa" (UID: "e6474380-de01-4e68-bcea-caf2ce9bb2aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.338486 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6474380-de01-4e68-bcea-caf2ce9bb2aa" (UID: "e6474380-de01-4e68-bcea-caf2ce9bb2aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.348507 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-config-data" (OuterVolumeSpecName: "config-data") pod "e6474380-de01-4e68-bcea-caf2ce9bb2aa" (UID: "e6474380-de01-4e68-bcea-caf2ce9bb2aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.401702 4799 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.401743 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.401753 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfvfj\" (UniqueName: \"kubernetes.io/projected/e6474380-de01-4e68-bcea-caf2ce9bb2aa-kube-api-access-cfvfj\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.401761 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6474380-de01-4e68-bcea-caf2ce9bb2aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.749644 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsv5h" event={"ID":"7cf8cac2-5686-40a2-91ee-86b8dc75db37","Type":"ContainerStarted","Data":"d2d363a5024fde2260bd11d6071c4f6731734ef49443ee3633361452b82c7c71"} Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.752466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-69hl6" event={"ID":"e6474380-de01-4e68-bcea-caf2ce9bb2aa","Type":"ContainerDied","Data":"8ab3affce6b5f63459d85f5086cd1d001c5f39d9f4943168307420aa37fcbdf2"} Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.752502 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ab3affce6b5f63459d85f5086cd1d001c5f39d9f4943168307420aa37fcbdf2" Feb 16 12:55:00 crc 
kubenswrapper[4799]: I0216 12:55:00.752511 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-69hl6" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.785304 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qsv5h" podStartSLOduration=3.222887434 podStartE2EDuration="10.785276255s" podCreationTimestamp="2026-02-16 12:54:50 +0000 UTC" firstStartedPulling="2026-02-16 12:54:52.633724981 +0000 UTC m=+1398.226740315" lastFinishedPulling="2026-02-16 12:55:00.196113802 +0000 UTC m=+1405.789129136" observedRunningTime="2026-02-16 12:55:00.776912156 +0000 UTC m=+1406.369927490" watchObservedRunningTime="2026-02-16 12:55:00.785276255 +0000 UTC m=+1406.378291589" Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.958718 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.959304 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-log" containerID="cri-o://65e228536f76b66350fefe1ff284534af82751994e950d51777e241018e1dbf3" gracePeriod=30 Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.959508 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-api" containerID="cri-o://0ad3295a07b4a07cfe006a25ca47edae4a8af8692004d88f32135df24c3fad0c" gracePeriod=30 Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.977914 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:55:00 crc kubenswrapper[4799]: I0216 12:55:00.978408 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" 
podUID="f52827a2-06e0-4f60-ac3d-2efdc2b182d4" containerName="nova-scheduler-scheduler" containerID="cri-o://426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2" gracePeriod=30 Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.032489 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.032767 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-log" containerID="cri-o://ed39cac0c1fdaf791042a69457f1ac122032a9b62b834185ad2e362cbf11b0ff" gracePeriod=30 Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.032851 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-metadata" containerID="cri-o://973f23b0608257c8513aff7563fa649fb7f20a418e4b587c6edf9cfc890d50c2" gracePeriod=30 Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.243307 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.243376 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.764473 4799 generic.go:334] "Generic (PLEG): container finished" podID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerID="65e228536f76b66350fefe1ff284534af82751994e950d51777e241018e1dbf3" exitCode=143 Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.764561 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d723753-db9a-4e04-ab35-5949e0af15fa","Type":"ContainerDied","Data":"65e228536f76b66350fefe1ff284534af82751994e950d51777e241018e1dbf3"} Feb 16 12:55:01 crc 
kubenswrapper[4799]: I0216 12:55:01.767245 4799 generic.go:334] "Generic (PLEG): container finished" podID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerID="ed39cac0c1fdaf791042a69457f1ac122032a9b62b834185ad2e362cbf11b0ff" exitCode=143 Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.767328 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01","Type":"ContainerDied","Data":"ed39cac0c1fdaf791042a69457f1ac122032a9b62b834185ad2e362cbf11b0ff"} Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.949466 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:55:01 crc kubenswrapper[4799]: I0216 12:55:01.949521 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:55:02 crc kubenswrapper[4799]: I0216 12:55:02.293298 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qsv5h" podUID="7cf8cac2-5686-40a2-91ee-86b8dc75db37" containerName="registry-server" probeResult="failure" output=< Feb 16 12:55:02 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:55:02 crc kubenswrapper[4799]: > Feb 16 12:55:02 crc kubenswrapper[4799]: I0216 12:55:02.671725 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": read tcp 10.217.0.2:55266->10.217.0.224:8775: read: connection reset by peer" Feb 16 12:55:02 crc kubenswrapper[4799]: I0216 12:55:02.671797 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.224:8775/\": read tcp 10.217.0.2:55252->10.217.0.224:8775: read: connection reset by peer" Feb 16 12:55:02 crc kubenswrapper[4799]: I0216 12:55:02.790715 4799 generic.go:334] "Generic (PLEG): container finished" podID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerID="0ad3295a07b4a07cfe006a25ca47edae4a8af8692004d88f32135df24c3fad0c" exitCode=0 Feb 16 12:55:02 crc kubenswrapper[4799]: I0216 12:55:02.790784 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d723753-db9a-4e04-ab35-5949e0af15fa","Type":"ContainerDied","Data":"0ad3295a07b4a07cfe006a25ca47edae4a8af8692004d88f32135df24c3fad0c"} Feb 16 12:55:02 crc kubenswrapper[4799]: I0216 12:55:02.793359 4799 generic.go:334] "Generic (PLEG): container finished" podID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerID="973f23b0608257c8513aff7563fa649fb7f20a418e4b587c6edf9cfc890d50c2" exitCode=0 Feb 16 12:55:02 crc kubenswrapper[4799]: I0216 12:55:02.794440 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01","Type":"ContainerDied","Data":"973f23b0608257c8513aff7563fa649fb7f20a418e4b587c6edf9cfc890d50c2"} Feb 16 12:55:02 crc kubenswrapper[4799]: E0216 12:55:02.839032 4799 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e2b3fcf_00ec_4d11_9d47_b1aeb9b33a01.slice/crio-conmon-973f23b0608257c8513aff7563fa649fb7f20a418e4b587c6edf9cfc890d50c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e2b3fcf_00ec_4d11_9d47_b1aeb9b33a01.slice/crio-973f23b0608257c8513aff7563fa649fb7f20a418e4b587c6edf9cfc890d50c2.scope\": RecentStats: unable to find data in memory cache]" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.004333 4799 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-m952t" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="registry-server" probeResult="failure" output=< Feb 16 12:55:03 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:55:03 crc kubenswrapper[4799]: > Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.295081 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.302430 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.429894 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-combined-ca-bundle\") pod \"9d723753-db9a-4e04-ab35-5949e0af15fa\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.429968 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7dbw\" (UniqueName: \"kubernetes.io/projected/9d723753-db9a-4e04-ab35-5949e0af15fa-kube-api-access-h7dbw\") pod \"9d723753-db9a-4e04-ab35-5949e0af15fa\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430011 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-combined-ca-bundle\") pod \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430049 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-internal-tls-certs\") 
pod \"9d723753-db9a-4e04-ab35-5949e0af15fa\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430079 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-public-tls-certs\") pod \"9d723753-db9a-4e04-ab35-5949e0af15fa\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430107 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-nova-metadata-tls-certs\") pod \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430182 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-config-data\") pod \"9d723753-db9a-4e04-ab35-5949e0af15fa\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430205 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-config-data\") pod \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430292 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d723753-db9a-4e04-ab35-5949e0af15fa-logs\") pod \"9d723753-db9a-4e04-ab35-5949e0af15fa\" (UID: \"9d723753-db9a-4e04-ab35-5949e0af15fa\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430318 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-5wb5s\" (UniqueName: \"kubernetes.io/projected/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-kube-api-access-5wb5s\") pod \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.430339 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-logs\") pod \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\" (UID: \"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01\") " Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.431492 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d723753-db9a-4e04-ab35-5949e0af15fa-logs" (OuterVolumeSpecName: "logs") pod "9d723753-db9a-4e04-ab35-5949e0af15fa" (UID: "9d723753-db9a-4e04-ab35-5949e0af15fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.432249 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d723753-db9a-4e04-ab35-5949e0af15fa-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.434526 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-logs" (OuterVolumeSpecName: "logs") pod "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" (UID: "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.438748 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d723753-db9a-4e04-ab35-5949e0af15fa-kube-api-access-h7dbw" (OuterVolumeSpecName: "kube-api-access-h7dbw") pod "9d723753-db9a-4e04-ab35-5949e0af15fa" (UID: "9d723753-db9a-4e04-ab35-5949e0af15fa"). 
InnerVolumeSpecName "kube-api-access-h7dbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.440591 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-kube-api-access-5wb5s" (OuterVolumeSpecName: "kube-api-access-5wb5s") pod "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" (UID: "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01"). InnerVolumeSpecName "kube-api-access-5wb5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.478919 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-config-data" (OuterVolumeSpecName: "config-data") pod "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" (UID: "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.479462 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" (UID: "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.480782 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d723753-db9a-4e04-ab35-5949e0af15fa" (UID: "9d723753-db9a-4e04-ab35-5949e0af15fa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.496258 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-config-data" (OuterVolumeSpecName: "config-data") pod "9d723753-db9a-4e04-ab35-5949e0af15fa" (UID: "9d723753-db9a-4e04-ab35-5949e0af15fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.515311 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9d723753-db9a-4e04-ab35-5949e0af15fa" (UID: "9d723753-db9a-4e04-ab35-5949e0af15fa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.521732 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9d723753-db9a-4e04-ab35-5949e0af15fa" (UID: "9d723753-db9a-4e04-ab35-5949e0af15fa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.529388 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" (UID: "5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534358 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534409 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7dbw\" (UniqueName: \"kubernetes.io/projected/9d723753-db9a-4e04-ab35-5949e0af15fa-kube-api-access-h7dbw\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534429 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534442 4799 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534456 4799 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534468 4799 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534481 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d723753-db9a-4e04-ab35-5949e0af15fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc 
kubenswrapper[4799]: I0216 12:55:03.534494 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534505 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wb5s\" (UniqueName: \"kubernetes.io/projected/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-kube-api-access-5wb5s\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.534518 4799 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01-logs\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.805219 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d723753-db9a-4e04-ab35-5949e0af15fa","Type":"ContainerDied","Data":"84efef3135f5edbb8897d0d7d5c083b7e68c515f01ed090737ca9c6cc9954cf3"} Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.805586 4799 scope.go:117] "RemoveContainer" containerID="0ad3295a07b4a07cfe006a25ca47edae4a8af8692004d88f32135df24c3fad0c" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.805479 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.808372 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01","Type":"ContainerDied","Data":"5df315f7cfcd6b1318a440fe9885035989f1dc72d2ca4e89c9c6e42590a5876c"} Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.808443 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.843052 4799 scope.go:117] "RemoveContainer" containerID="65e228536f76b66350fefe1ff284534af82751994e950d51777e241018e1dbf3" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.855191 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.871042 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.887422 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.911287 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.911586 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:55:03 crc kubenswrapper[4799]: E0216 12:55:03.912039 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-log" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.912099 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-log" Feb 16 12:55:03 crc kubenswrapper[4799]: E0216 12:55:03.912209 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-log" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.912272 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-log" Feb 16 12:55:03 crc kubenswrapper[4799]: E0216 12:55:03.912331 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6474380-de01-4e68-bcea-caf2ce9bb2aa" containerName="nova-manage" Feb 16 12:55:03 crc 
kubenswrapper[4799]: I0216 12:55:03.912383 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6474380-de01-4e68-bcea-caf2ce9bb2aa" containerName="nova-manage" Feb 16 12:55:03 crc kubenswrapper[4799]: E0216 12:55:03.912436 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-metadata" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.912482 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-metadata" Feb 16 12:55:03 crc kubenswrapper[4799]: E0216 12:55:03.912532 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-api" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.912596 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-api" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.912824 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6474380-de01-4e68-bcea-caf2ce9bb2aa" containerName="nova-manage" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.912880 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-api" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.912934 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" containerName="nova-api-log" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.912987 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-metadata" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.913036 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" containerName="nova-metadata-log" Feb 16 12:55:03 
crc kubenswrapper[4799]: I0216 12:55:03.913987 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.914152 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.921420 4799 scope.go:117] "RemoveContainer" containerID="973f23b0608257c8513aff7563fa649fb7f20a418e4b587c6edf9cfc890d50c2" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.925215 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.925409 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.925788 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.927833 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.931243 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.931560 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.931898 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.959561 4799 scope.go:117] "RemoveContainer" containerID="ed39cac0c1fdaf791042a69457f1ac122032a9b62b834185ad2e362cbf11b0ff" Feb 16 12:55:03 crc kubenswrapper[4799]: I0216 12:55:03.971534 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.049705 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050084 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050189 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-config-data\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " 
pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050311 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050399 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050511 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kblkh\" (UniqueName: \"kubernetes.io/projected/14e134b2-1c07-4a20-9bc6-ea4c75878094-kube-api-access-kblkh\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050634 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gmjd\" (UniqueName: \"kubernetes.io/projected/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-kube-api-access-5gmjd\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050807 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-logs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050927 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-config-data\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.050976 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14e134b2-1c07-4a20-9bc6-ea4c75878094-logs\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.051001 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.152939 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-logs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153009 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-config-data\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153052 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14e134b2-1c07-4a20-9bc6-ea4c75878094-logs\") pod \"nova-metadata-0\" (UID: 
\"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153078 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153106 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153153 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153174 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-config-data\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153196 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153212 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153231 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kblkh\" (UniqueName: \"kubernetes.io/projected/14e134b2-1c07-4a20-9bc6-ea4c75878094-kube-api-access-kblkh\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.153279 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gmjd\" (UniqueName: \"kubernetes.io/projected/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-kube-api-access-5gmjd\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.154094 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14e134b2-1c07-4a20-9bc6-ea4c75878094-logs\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.154895 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-logs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.159642 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " 
pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.161046 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-config-data\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.168262 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-config-data\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.170259 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.170406 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.172209 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.173802 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/14e134b2-1c07-4a20-9bc6-ea4c75878094-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.174680 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kblkh\" (UniqueName: \"kubernetes.io/projected/14e134b2-1c07-4a20-9bc6-ea4c75878094-kube-api-access-kblkh\") pod \"nova-metadata-0\" (UID: \"14e134b2-1c07-4a20-9bc6-ea4c75878094\") " pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.175715 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gmjd\" (UniqueName: \"kubernetes.io/projected/9b1e557e-1e13-4d03-a4b9-fddccf7fc783-kube-api-access-5gmjd\") pod \"nova-api-0\" (UID: \"9b1e557e-1e13-4d03-a4b9-fddccf7fc783\") " pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.260859 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.269832 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.337361 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.461324 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-combined-ca-bundle\") pod \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.461511 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-config-data\") pod \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.461591 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88p9q\" (UniqueName: \"kubernetes.io/projected/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-kube-api-access-88p9q\") pod \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\" (UID: \"f52827a2-06e0-4f60-ac3d-2efdc2b182d4\") " Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.469332 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-kube-api-access-88p9q" (OuterVolumeSpecName: "kube-api-access-88p9q") pod "f52827a2-06e0-4f60-ac3d-2efdc2b182d4" (UID: "f52827a2-06e0-4f60-ac3d-2efdc2b182d4"). InnerVolumeSpecName "kube-api-access-88p9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.501792 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f52827a2-06e0-4f60-ac3d-2efdc2b182d4" (UID: "f52827a2-06e0-4f60-ac3d-2efdc2b182d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.503411 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-config-data" (OuterVolumeSpecName: "config-data") pod "f52827a2-06e0-4f60-ac3d-2efdc2b182d4" (UID: "f52827a2-06e0-4f60-ac3d-2efdc2b182d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.563976 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.564114 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.564202 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88p9q\" (UniqueName: \"kubernetes.io/projected/f52827a2-06e0-4f60-ac3d-2efdc2b182d4-kube-api-access-88p9q\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.771610 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.825593 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"14e134b2-1c07-4a20-9bc6-ea4c75878094","Type":"ContainerStarted","Data":"5dcf2e4b68eb01f6fef1511a8730c4c1e48dad404c68e0238101390060cbfae8"} Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.830630 4799 generic.go:334] "Generic (PLEG): container finished" podID="f52827a2-06e0-4f60-ac3d-2efdc2b182d4" containerID="426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2" 
exitCode=0 Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.830692 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f52827a2-06e0-4f60-ac3d-2efdc2b182d4","Type":"ContainerDied","Data":"426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2"} Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.830723 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.830748 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f52827a2-06e0-4f60-ac3d-2efdc2b182d4","Type":"ContainerDied","Data":"59fc67425abf43ae3e73d8453b71e080ec79795ccd33ccbb8f8d0a1ab4ae4b7f"} Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.830769 4799 scope.go:117] "RemoveContainer" containerID="426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.846645 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 12:55:04 crc kubenswrapper[4799]: W0216 12:55:04.857475 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b1e557e_1e13_4d03_a4b9_fddccf7fc783.slice/crio-8b85c3fb18732d3a881461ca1ba72be8a48cf3c9ed5597cf920687dd18702886 WatchSource:0}: Error finding container 8b85c3fb18732d3a881461ca1ba72be8a48cf3c9ed5597cf920687dd18702886: Status 404 returned error can't find the container with id 8b85c3fb18732d3a881461ca1ba72be8a48cf3c9ed5597cf920687dd18702886 Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.978552 4799 scope.go:117] "RemoveContainer" containerID="426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2" Feb 16 12:55:04 crc kubenswrapper[4799]: E0216 12:55:04.978991 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2\": container with ID starting with 426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2 not found: ID does not exist" containerID="426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2" Feb 16 12:55:04 crc kubenswrapper[4799]: I0216 12:55:04.979035 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2"} err="failed to get container status \"426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2\": rpc error: code = NotFound desc = could not find container \"426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2\": container with ID starting with 426b14be7bf617bb9ca8cc4bb7556d713692a400b7fc6ceb4fe66161e53829e2 not found: ID does not exist" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.012759 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.030928 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.045672 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:55:05 crc kubenswrapper[4799]: E0216 12:55:05.046215 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f52827a2-06e0-4f60-ac3d-2efdc2b182d4" containerName="nova-scheduler-scheduler" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.046241 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52827a2-06e0-4f60-ac3d-2efdc2b182d4" containerName="nova-scheduler-scheduler" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.046468 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f52827a2-06e0-4f60-ac3d-2efdc2b182d4" containerName="nova-scheduler-scheduler" Feb 16 12:55:05 crc 
kubenswrapper[4799]: I0216 12:55:05.047293 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.049678 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.060476 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.168220 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01" path="/var/lib/kubelet/pods/5e2b3fcf-00ec-4d11-9d47-b1aeb9b33a01/volumes" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.169194 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d723753-db9a-4e04-ab35-5949e0af15fa" path="/var/lib/kubelet/pods/9d723753-db9a-4e04-ab35-5949e0af15fa/volumes" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.169846 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f52827a2-06e0-4f60-ac3d-2efdc2b182d4" path="/var/lib/kubelet/pods/f52827a2-06e0-4f60-ac3d-2efdc2b182d4/volumes" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.176866 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dpxg\" (UniqueName: \"kubernetes.io/projected/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-kube-api-access-6dpxg\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.177138 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " 
pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.177174 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-config-data\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.281821 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dpxg\" (UniqueName: \"kubernetes.io/projected/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-kube-api-access-6dpxg\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.281907 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.281965 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-config-data\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.287679 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-config-data\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.288287 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.316475 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dpxg\" (UniqueName: \"kubernetes.io/projected/9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53-kube-api-access-6dpxg\") pod \"nova-scheduler-0\" (UID: \"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53\") " pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.424796 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.842883 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b1e557e-1e13-4d03-a4b9-fddccf7fc783","Type":"ContainerStarted","Data":"504acc4002de5f6a094722ffb4fc296ab8c34742f7012b2542ee4c5be05c6d96"} Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.844227 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b1e557e-1e13-4d03-a4b9-fddccf7fc783","Type":"ContainerStarted","Data":"2e8280dff4eb7526f808d607a514b33eaee17ac2a4e05a168bfb858c99194949"} Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.844331 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b1e557e-1e13-4d03-a4b9-fddccf7fc783","Type":"ContainerStarted","Data":"8b85c3fb18732d3a881461ca1ba72be8a48cf3c9ed5597cf920687dd18702886"} Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.852442 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"14e134b2-1c07-4a20-9bc6-ea4c75878094","Type":"ContainerStarted","Data":"8f974cfa803dba2019eee78b4aa4da653c339d3049da1824a9e272cc12c49855"} Feb 16 12:55:05 crc 
kubenswrapper[4799]: I0216 12:55:05.852506 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"14e134b2-1c07-4a20-9bc6-ea4c75878094","Type":"ContainerStarted","Data":"394707b8fa49bbcf8c88d88d957f28de699a171ecb11b1d934f04d5cb6c494c3"} Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.866736 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.866716523 podStartE2EDuration="2.866716523s" podCreationTimestamp="2026-02-16 12:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:05.864431407 +0000 UTC m=+1411.457446751" watchObservedRunningTime="2026-02-16 12:55:05.866716523 +0000 UTC m=+1411.459731857" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.897927 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.897898445 podStartE2EDuration="2.897898445s" podCreationTimestamp="2026-02-16 12:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:05.888075624 +0000 UTC m=+1411.481090948" watchObservedRunningTime="2026-02-16 12:55:05.897898445 +0000 UTC m=+1411.490913789" Feb 16 12:55:05 crc kubenswrapper[4799]: I0216 12:55:05.919870 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 12:55:06 crc kubenswrapper[4799]: I0216 12:55:06.866565 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53","Type":"ContainerStarted","Data":"084b40aba753084efd4972692495efa60e2f36f1cb84a6298b63c71291c91e44"} Feb 16 12:55:06 crc kubenswrapper[4799]: I0216 12:55:06.866942 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53","Type":"ContainerStarted","Data":"230c142634a3e323c5414dda7dd5800dcb2da5136d071b6f687a378fcb72cf14"} Feb 16 12:55:06 crc kubenswrapper[4799]: I0216 12:55:06.883438 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.883419068 podStartE2EDuration="1.883419068s" podCreationTimestamp="2026-02-16 12:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:06.881415761 +0000 UTC m=+1412.474431105" watchObservedRunningTime="2026-02-16 12:55:06.883419068 +0000 UTC m=+1412.476434402" Feb 16 12:55:09 crc kubenswrapper[4799]: I0216 12:55:09.261807 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 12:55:09 crc kubenswrapper[4799]: I0216 12:55:09.262378 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 12:55:10 crc kubenswrapper[4799]: I0216 12:55:10.425092 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 12:55:11 crc kubenswrapper[4799]: I0216 12:55:11.286948 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:55:11 crc kubenswrapper[4799]: I0216 12:55:11.341715 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qsv5h" Feb 16 12:55:12 crc kubenswrapper[4799]: I0216 12:55:12.291919 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsv5h"] Feb 16 12:55:12 crc kubenswrapper[4799]: I0216 12:55:12.472692 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lxfqt"] Feb 16 12:55:12 crc kubenswrapper[4799]: I0216 12:55:12.473329 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lxfqt" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerName="registry-server" containerID="cri-o://31a1a63daeefecc31f77e6170ebb5b05a0576d67fb9f77b40f3d7ac483c76292" gracePeriod=2 Feb 16 12:55:12 crc kubenswrapper[4799]: I0216 12:55:12.935131 4799 generic.go:334] "Generic (PLEG): container finished" podID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerID="31a1a63daeefecc31f77e6170ebb5b05a0576d67fb9f77b40f3d7ac483c76292" exitCode=0 Feb 16 12:55:12 crc kubenswrapper[4799]: I0216 12:55:12.935209 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxfqt" event={"ID":"d89b19db-d98a-4004-b73c-8bb54ddf632d","Type":"ContainerDied","Data":"31a1a63daeefecc31f77e6170ebb5b05a0576d67fb9f77b40f3d7ac483c76292"} Feb 16 12:55:12 crc kubenswrapper[4799]: I0216 12:55:12.935493 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxfqt" event={"ID":"d89b19db-d98a-4004-b73c-8bb54ddf632d","Type":"ContainerDied","Data":"ffd7103119cb4a2e9eaee70d6dff5010c3adfed6ea77df3f561740a55f41cf21"} Feb 16 12:55:12 crc kubenswrapper[4799]: I0216 12:55:12.935514 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd7103119cb4a2e9eaee70d6dff5010c3adfed6ea77df3f561740a55f41cf21" Feb 16 12:55:12 crc kubenswrapper[4799]: I0216 12:55:12.998341 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m952t" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="registry-server" probeResult="failure" output=< Feb 16 12:55:12 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:55:12 crc kubenswrapper[4799]: > Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.001750 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.165552 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-utilities\") pod \"d89b19db-d98a-4004-b73c-8bb54ddf632d\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.166185 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-catalog-content\") pod \"d89b19db-d98a-4004-b73c-8bb54ddf632d\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.166224 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqfxd\" (UniqueName: \"kubernetes.io/projected/d89b19db-d98a-4004-b73c-8bb54ddf632d-kube-api-access-rqfxd\") pod \"d89b19db-d98a-4004-b73c-8bb54ddf632d\" (UID: \"d89b19db-d98a-4004-b73c-8bb54ddf632d\") " Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.167932 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-utilities" (OuterVolumeSpecName: "utilities") pod "d89b19db-d98a-4004-b73c-8bb54ddf632d" (UID: "d89b19db-d98a-4004-b73c-8bb54ddf632d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.183063 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89b19db-d98a-4004-b73c-8bb54ddf632d-kube-api-access-rqfxd" (OuterVolumeSpecName: "kube-api-access-rqfxd") pod "d89b19db-d98a-4004-b73c-8bb54ddf632d" (UID: "d89b19db-d98a-4004-b73c-8bb54ddf632d"). InnerVolumeSpecName "kube-api-access-rqfxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.248110 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d89b19db-d98a-4004-b73c-8bb54ddf632d" (UID: "d89b19db-d98a-4004-b73c-8bb54ddf632d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.268604 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.268653 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqfxd\" (UniqueName: \"kubernetes.io/projected/d89b19db-d98a-4004-b73c-8bb54ddf632d-kube-api-access-rqfxd\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.268664 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89b19db-d98a-4004-b73c-8bb54ddf632d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.945035 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxfqt" Feb 16 12:55:13 crc kubenswrapper[4799]: I0216 12:55:13.993050 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lxfqt"] Feb 16 12:55:14 crc kubenswrapper[4799]: I0216 12:55:14.003075 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lxfqt"] Feb 16 12:55:14 crc kubenswrapper[4799]: I0216 12:55:14.261637 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 12:55:14 crc kubenswrapper[4799]: I0216 12:55:14.261707 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 12:55:14 crc kubenswrapper[4799]: I0216 12:55:14.270990 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 12:55:14 crc kubenswrapper[4799]: I0216 12:55:14.271049 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 12:55:15 crc kubenswrapper[4799]: I0216 12:55:15.166522 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" path="/var/lib/kubelet/pods/d89b19db-d98a-4004-b73c-8bb54ddf632d/volumes" Feb 16 12:55:15 crc kubenswrapper[4799]: I0216 12:55:15.276365 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="14e134b2-1c07-4a20-9bc6-ea4c75878094" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.235:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:55:15 crc kubenswrapper[4799]: I0216 12:55:15.276472 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="14e134b2-1c07-4a20-9bc6-ea4c75878094" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.235:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:55:15 crc kubenswrapper[4799]: I0216 12:55:15.287334 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b1e557e-1e13-4d03-a4b9-fddccf7fc783" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.236:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:55:15 crc kubenswrapper[4799]: I0216 12:55:15.287358 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b1e557e-1e13-4d03-a4b9-fddccf7fc783" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.236:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 12:55:15 crc kubenswrapper[4799]: I0216 12:55:15.425623 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 12:55:15 crc kubenswrapper[4799]: I0216 12:55:15.475952 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 12:55:15 crc kubenswrapper[4799]: I0216 12:55:15.962451 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 12:55:16 crc kubenswrapper[4799]: I0216 12:55:16.033215 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 12:55:21 crc kubenswrapper[4799]: I0216 12:55:21.792557 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:55:21 crc kubenswrapper[4799]: I0216 12:55:21.793157 4799 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:55:23 crc kubenswrapper[4799]: I0216 12:55:23.006432 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m952t" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="registry-server" probeResult="failure" output=< Feb 16 12:55:23 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 12:55:23 crc kubenswrapper[4799]: > Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.270628 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.272722 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.281932 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.292065 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.292195 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.292672 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.292729 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.318995 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Feb 16 12:55:24 crc kubenswrapper[4799]: I0216 12:55:24.322727 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 12:55:25 crc kubenswrapper[4799]: I0216 12:55:25.067235 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 12:55:32 crc kubenswrapper[4799]: I0216 12:55:32.025725 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:55:32 crc kubenswrapper[4799]: I0216 12:55:32.086401 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.265791 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m952t"] Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.266049 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m952t" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="registry-server" containerID="cri-o://43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2" gracePeriod=2 Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.746262 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.814939 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6gff\" (UniqueName: \"kubernetes.io/projected/dd9766ad-b126-4eff-bd30-0ffedfcff830-kube-api-access-z6gff\") pod \"dd9766ad-b126-4eff-bd30-0ffedfcff830\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.815054 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-utilities\") pod \"dd9766ad-b126-4eff-bd30-0ffedfcff830\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.815098 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-catalog-content\") pod \"dd9766ad-b126-4eff-bd30-0ffedfcff830\" (UID: \"dd9766ad-b126-4eff-bd30-0ffedfcff830\") " Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.815811 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-utilities" (OuterVolumeSpecName: "utilities") pod "dd9766ad-b126-4eff-bd30-0ffedfcff830" (UID: "dd9766ad-b126-4eff-bd30-0ffedfcff830"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.822835 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9766ad-b126-4eff-bd30-0ffedfcff830-kube-api-access-z6gff" (OuterVolumeSpecName: "kube-api-access-z6gff") pod "dd9766ad-b126-4eff-bd30-0ffedfcff830" (UID: "dd9766ad-b126-4eff-bd30-0ffedfcff830"). InnerVolumeSpecName "kube-api-access-z6gff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.916732 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6gff\" (UniqueName: \"kubernetes.io/projected/dd9766ad-b126-4eff-bd30-0ffedfcff830-kube-api-access-z6gff\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.916768 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:33 crc kubenswrapper[4799]: I0216 12:55:33.969593 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd9766ad-b126-4eff-bd30-0ffedfcff830" (UID: "dd9766ad-b126-4eff-bd30-0ffedfcff830"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.018977 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd9766ad-b126-4eff-bd30-0ffedfcff830-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.173775 4799 generic.go:334] "Generic (PLEG): container finished" podID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerID="43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2" exitCode=0 Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.173870 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m952t" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.173854 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m952t" event={"ID":"dd9766ad-b126-4eff-bd30-0ffedfcff830","Type":"ContainerDied","Data":"43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2"} Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.174061 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m952t" event={"ID":"dd9766ad-b126-4eff-bd30-0ffedfcff830","Type":"ContainerDied","Data":"5d0ff0484805be010d5685a2068521138719b8c02cc53f24e9dbdd7649473e98"} Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.174108 4799 scope.go:117] "RemoveContainer" containerID="43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.225201 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m952t"] Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.228005 4799 scope.go:117] "RemoveContainer" containerID="b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.237092 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m952t"] Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.261800 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.271435 4799 scope.go:117] "RemoveContainer" containerID="1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.369342 4799 scope.go:117] "RemoveContainer" containerID="43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2" Feb 16 12:55:34 crc kubenswrapper[4799]: E0216 12:55:34.374996 4799 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2\": container with ID starting with 43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2 not found: ID does not exist" containerID="43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.375055 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2"} err="failed to get container status \"43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2\": rpc error: code = NotFound desc = could not find container \"43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2\": container with ID starting with 43ba684667efd12fd30f523f651d9f3cd25e30172e9dee17f9c38da76214bbd2 not found: ID does not exist" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.375088 4799 scope.go:117] "RemoveContainer" containerID="b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c" Feb 16 12:55:34 crc kubenswrapper[4799]: E0216 12:55:34.378762 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c\": container with ID starting with b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c not found: ID does not exist" containerID="b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.378834 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c"} err="failed to get container status \"b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c\": rpc error: code = NotFound desc = could 
not find container \"b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c\": container with ID starting with b2449187f6428a1da011bf53963aa59b72d0ab7490d46496bf1c2d21317dc38c not found: ID does not exist" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.378874 4799 scope.go:117] "RemoveContainer" containerID="1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689" Feb 16 12:55:34 crc kubenswrapper[4799]: E0216 12:55:34.379242 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689\": container with ID starting with 1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689 not found: ID does not exist" containerID="1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689" Feb 16 12:55:34 crc kubenswrapper[4799]: I0216 12:55:34.379283 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689"} err="failed to get container status \"1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689\": rpc error: code = NotFound desc = could not find container \"1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689\": container with ID starting with 1adcd8d4bc2104a9e2b12410bc8187c2812b7560e7961ee39b30655deac8c689 not found: ID does not exist" Feb 16 12:55:35 crc kubenswrapper[4799]: I0216 12:55:35.073881 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:55:35 crc kubenswrapper[4799]: I0216 12:55:35.162833 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" path="/var/lib/kubelet/pods/dd9766ad-b126-4eff-bd30-0ffedfcff830/volumes" Feb 16 12:55:37 crc kubenswrapper[4799]: I0216 12:55:37.695909 4799 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/rabbitmq-server-0" podUID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerName="rabbitmq" containerID="cri-o://81152b23dfbe435acd5f67f4e28899693c445d87d9cafae393f5f1445510a537" gracePeriod=604797 Feb 16 12:55:37 crc kubenswrapper[4799]: I0216 12:55:37.922581 4799 scope.go:117] "RemoveContainer" containerID="2444c82e4c6252f03f69b26b2fb14fb39dfecd147153b07aabe046ad48801351" Feb 16 12:55:37 crc kubenswrapper[4799]: I0216 12:55:37.953094 4799 scope.go:117] "RemoveContainer" containerID="4bda4f5356e5c224e7793a3446e1c92163f4e69c5c711dd530e643741f53d58d" Feb 16 12:55:37 crc kubenswrapper[4799]: I0216 12:55:37.996533 4799 scope.go:117] "RemoveContainer" containerID="31a1a63daeefecc31f77e6170ebb5b05a0576d67fb9f77b40f3d7ac483c76292" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.458389 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerName="rabbitmq" containerID="cri-o://5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc" gracePeriod=604797 Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.824496 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wdmt7"] Feb 16 12:55:38 crc kubenswrapper[4799]: E0216 12:55:38.825209 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="extract-utilities" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.825233 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="extract-utilities" Feb 16 12:55:38 crc kubenswrapper[4799]: E0216 12:55:38.825272 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="registry-server" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.825314 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="registry-server" Feb 16 12:55:38 crc kubenswrapper[4799]: E0216 12:55:38.825336 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerName="extract-utilities" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.825350 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerName="extract-utilities" Feb 16 12:55:38 crc kubenswrapper[4799]: E0216 12:55:38.825378 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerName="registry-server" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.825389 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerName="registry-server" Feb 16 12:55:38 crc kubenswrapper[4799]: E0216 12:55:38.825412 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerName="extract-content" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.825422 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerName="extract-content" Feb 16 12:55:38 crc kubenswrapper[4799]: E0216 12:55:38.825448 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="extract-content" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.825459 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="extract-content" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.825768 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89b19db-d98a-4004-b73c-8bb54ddf632d" containerName="registry-server" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.825821 4799 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dd9766ad-b126-4eff-bd30-0ffedfcff830" containerName="registry-server" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.828526 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.838025 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wdmt7"] Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.944311 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-utilities\") pod \"community-operators-wdmt7\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.944450 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbj76\" (UniqueName: \"kubernetes.io/projected/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-kube-api-access-nbj76\") pod \"community-operators-wdmt7\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:38 crc kubenswrapper[4799]: I0216 12:55:38.944579 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-catalog-content\") pod \"community-operators-wdmt7\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.046373 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-utilities\") pod \"community-operators-wdmt7\" (UID: 
\"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.046764 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbj76\" (UniqueName: \"kubernetes.io/projected/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-kube-api-access-nbj76\") pod \"community-operators-wdmt7\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.046871 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-catalog-content\") pod \"community-operators-wdmt7\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.047176 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-utilities\") pod \"community-operators-wdmt7\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.048747 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-catalog-content\") pod \"community-operators-wdmt7\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.077444 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbj76\" (UniqueName: \"kubernetes.io/projected/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-kube-api-access-nbj76\") pod \"community-operators-wdmt7\" (UID: 
\"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.262340 4799 generic.go:334] "Generic (PLEG): container finished" podID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerID="81152b23dfbe435acd5f67f4e28899693c445d87d9cafae393f5f1445510a537" exitCode=0 Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.262395 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8af3fbd4-c626-4920-915d-0f50d12662b6","Type":"ContainerDied","Data":"81152b23dfbe435acd5f67f4e28899693c445d87d9cafae393f5f1445510a537"} Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.296878 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.413367 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.560909 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-plugins\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.560968 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8af3fbd4-c626-4920-915d-0f50d12662b6-pod-info\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.561030 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-erlang-cookie\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.561058 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-confd\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.561151 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-config-data\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.561176 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-server-conf\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.561253 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8af3fbd4-c626-4920-915d-0f50d12662b6-erlang-cookie-secret\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.561287 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs5br\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-kube-api-access-bs5br\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc 
kubenswrapper[4799]: I0216 12:55:39.561365 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-plugins-conf\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.561388 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-tls\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.561430 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"8af3fbd4-c626-4920-915d-0f50d12662b6\" (UID: \"8af3fbd4-c626-4920-915d-0f50d12662b6\") " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.572765 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.573823 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.575109 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.583806 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8af3fbd4-c626-4920-915d-0f50d12662b6-pod-info" (OuterVolumeSpecName: "pod-info") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.584375 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.590552 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-kube-api-access-bs5br" (OuterVolumeSpecName: "kube-api-access-bs5br") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "kube-api-access-bs5br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.596296 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af3fbd4-c626-4920-915d-0f50d12662b6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.601729 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.632948 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-config-data" (OuterVolumeSpecName: "config-data") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.664726 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.664767 4799 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8af3fbd4-c626-4920-915d-0f50d12662b6-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.664779 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.664791 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.664801 4799 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8af3fbd4-c626-4920-915d-0f50d12662b6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.664811 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs5br\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-kube-api-access-bs5br\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.664822 4799 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: 
I0216 12:55:39.664830 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.664850 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.696292 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.702488 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-server-conf" (OuterVolumeSpecName: "server-conf") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.768268 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.768303 4799 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8af3fbd4-c626-4920-915d-0f50d12662b6-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.777323 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8af3fbd4-c626-4920-915d-0f50d12662b6" (UID: "8af3fbd4-c626-4920-915d-0f50d12662b6"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.869740 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8af3fbd4-c626-4920-915d-0f50d12662b6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:39 crc kubenswrapper[4799]: I0216 12:55:39.918193 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wdmt7"] Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.172367 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.282575 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdmt7" event={"ID":"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83","Type":"ContainerStarted","Data":"97d576c2e67916ea8c50f2c39e503b8ebe70640aa89466260df4631305725454"} Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290070 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-confd\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290303 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-config-data\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290323 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-tls\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290360 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-pod-info\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290379 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290415 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk49s\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-kube-api-access-wk49s\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290454 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-erlang-cookie-secret\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290477 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-erlang-cookie\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 
12:55:40.290501 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-plugins-conf\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290516 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-server-conf\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.290552 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-plugins\") pod \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\" (UID: \"1e3da06f-f1ef-4b8c-963b-0994cde5fab7\") " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.291664 4799 generic.go:334] "Generic (PLEG): container finished" podID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerID="5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc" exitCode=0 Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.292291 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e3da06f-f1ef-4b8c-963b-0994cde5fab7","Type":"ContainerDied","Data":"5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc"} Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.292330 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e3da06f-f1ef-4b8c-963b-0994cde5fab7","Type":"ContainerDied","Data":"311f57f1e3aafc08e1fbef6b908f2a0489b7d0ee2b596f1fd1f53eaf5a0966d8"} Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.292348 4799 scope.go:117] "RemoveContainer" 
containerID="5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.292497 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.293316 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.298004 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.298714 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.300273 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-pod-info" (OuterVolumeSpecName: "pod-info") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.300589 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.310361 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.310560 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-kube-api-access-wk49s" (OuterVolumeSpecName: "kube-api-access-wk49s") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "kube-api-access-wk49s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.311463 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8af3fbd4-c626-4920-915d-0f50d12662b6","Type":"ContainerDied","Data":"a1a9f9017debaf754cc04fd8a11a038e20627d55df5263e2f9aa26a3e4d064bd"} Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.311568 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.312334 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.351587 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-config-data" (OuterVolumeSpecName: "config-data") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.382561 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-server-conf" (OuterVolumeSpecName: "server-conf") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393277 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393321 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393334 4799 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393369 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393386 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk49s\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-kube-api-access-wk49s\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393400 4799 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393413 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393424 
4799 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393538 4799 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.393552 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.434354 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1e3da06f-f1ef-4b8c-963b-0994cde5fab7" (UID: "1e3da06f-f1ef-4b8c-963b-0994cde5fab7"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.440914 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.485309 4799 scope.go:117] "RemoveContainer" containerID="1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.489783 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.503304 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.503598 4799 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e3da06f-f1ef-4b8c-963b-0994cde5fab7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.504187 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.520225 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:55:40 crc kubenswrapper[4799]: E0216 12:55:40.520724 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerName="setup-container" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.520742 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerName="setup-container" Feb 16 12:55:40 crc kubenswrapper[4799]: E0216 12:55:40.520760 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerName="setup-container" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.520766 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerName="setup-container" Feb 16 12:55:40 crc kubenswrapper[4799]: E0216 12:55:40.520786 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerName="rabbitmq" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.520792 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerName="rabbitmq" Feb 16 12:55:40 crc kubenswrapper[4799]: E0216 12:55:40.520822 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerName="rabbitmq" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.520830 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerName="rabbitmq" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.521012 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af3fbd4-c626-4920-915d-0f50d12662b6" containerName="rabbitmq" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.521026 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" containerName="rabbitmq" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.522046 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.528875 4799 scope.go:117] "RemoveContainer" containerID="5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.528902 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.528922 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.529067 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.529224 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.529421 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zghbx" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.529667 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 12:55:40 crc kubenswrapper[4799]: E0216 12:55:40.529684 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc\": container with ID starting with 5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc not found: ID does not exist" containerID="5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.529739 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc"} err="failed to get container status 
\"5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc\": rpc error: code = NotFound desc = could not find container \"5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc\": container with ID starting with 5be9fa09b3ed3fb0b10e2811273f4aff8f5e2e1539fb2410487c095ecc2df5fc not found: ID does not exist" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.529766 4799 scope.go:117] "RemoveContainer" containerID="1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.529811 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 12:55:40 crc kubenswrapper[4799]: E0216 12:55:40.532506 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096\": container with ID starting with 1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096 not found: ID does not exist" containerID="1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.532547 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096"} err="failed to get container status \"1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096\": rpc error: code = NotFound desc = could not find container \"1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096\": container with ID starting with 1b839d6cad87299d0564d541fa139b0f00f0dbf59adf92e913b9fa2f82e15096 not found: ID does not exist" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.532574 4799 scope.go:117] "RemoveContainer" containerID="81152b23dfbe435acd5f67f4e28899693c445d87d9cafae393f5f1445510a537" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.547519 4799 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.560048 4799 scope.go:117] "RemoveContainer" containerID="ab78b8d9b5f8e466b857a5f3123961b938a51fbc0fdeca53ac77857645a6278b" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.631851 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.679033 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.709211 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.709447 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.709494 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.709565 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-tls\") pod \"rabbitmq-server-0\" 
(UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.709683 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.709717 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a6be377-3c2d-46ab-a9b1-3faa91644a58-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.709753 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5q2\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-kube-api-access-nc5q2\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.709994 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.710082 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a6be377-3c2d-46ab-a9b1-3faa91644a58-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " 
pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.710335 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.710444 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.711231 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.715699 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.719521 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.719761 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7r8ht" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.719914 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.720086 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.720887 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.721012 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.721068 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.730778 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.812483 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.812705 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.812820 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.813551 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.813605 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.821588 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.821711 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.821744 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52adb145-1b05-4515-a214-83731e3504b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.821783 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.821864 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.821899 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a6be377-3c2d-46ab-a9b1-3faa91644a58-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.821936 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5q2\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-kube-api-access-nc5q2\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 
12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.821960 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52adb145-1b05-4515-a214-83731e3504b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822073 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822161 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822233 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a6be377-3c2d-46ab-a9b1-3faa91644a58-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822341 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822403 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5r9f\" (UniqueName: \"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-kube-api-access-w5r9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822457 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822490 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822539 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.823178 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.824736 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.824955 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.825654 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a6be377-3c2d-46ab-a9b1-3faa91644a58-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.822608 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.826345 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.826400 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.827656 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a6be377-3c2d-46ab-a9b1-3faa91644a58-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.827832 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a6be377-3c2d-46ab-a9b1-3faa91644a58-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.829754 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.832337 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.847279 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5q2\" (UniqueName: \"kubernetes.io/projected/7a6be377-3c2d-46ab-a9b1-3faa91644a58-kube-api-access-nc5q2\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.862394 4799 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7a6be377-3c2d-46ab-a9b1-3faa91644a58\") " pod="openstack/rabbitmq-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.928744 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5r9f\" (UniqueName: \"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-kube-api-access-w5r9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.928809 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.928854 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.928872 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.928892 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.928933 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.928960 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.928997 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52adb145-1b05-4515-a214-83731e3504b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.929014 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.929037 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52adb145-1b05-4515-a214-83731e3504b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.929066 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.929823 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.930153 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.930836 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.931144 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.931526 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.932247 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52adb145-1b05-4515-a214-83731e3504b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.938727 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52adb145-1b05-4515-a214-83731e3504b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.939026 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52adb145-1b05-4515-a214-83731e3504b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.941379 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.949839 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.951966 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5r9f\" (UniqueName: \"kubernetes.io/projected/52adb145-1b05-4515-a214-83731e3504b4-kube-api-access-w5r9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:40 crc kubenswrapper[4799]: I0216 12:55:40.967637 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52adb145-1b05-4515-a214-83731e3504b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:41 crc kubenswrapper[4799]: I0216 12:55:41.045412 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:55:41 crc kubenswrapper[4799]: I0216 12:55:41.156767 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 12:55:41 crc kubenswrapper[4799]: I0216 12:55:41.169011 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3da06f-f1ef-4b8c-963b-0994cde5fab7" path="/var/lib/kubelet/pods/1e3da06f-f1ef-4b8c-963b-0994cde5fab7/volumes" Feb 16 12:55:41 crc kubenswrapper[4799]: I0216 12:55:41.172526 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af3fbd4-c626-4920-915d-0f50d12662b6" path="/var/lib/kubelet/pods/8af3fbd4-c626-4920-915d-0f50d12662b6/volumes" Feb 16 12:55:41 crc kubenswrapper[4799]: I0216 12:55:41.342927 4799 generic.go:334] "Generic (PLEG): container finished" podID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerID="0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461" exitCode=0 Feb 16 12:55:41 crc kubenswrapper[4799]: I0216 12:55:41.343047 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdmt7" event={"ID":"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83","Type":"ContainerDied","Data":"0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461"} Feb 16 12:55:41 crc kubenswrapper[4799]: I0216 12:55:41.538062 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 12:55:41 crc kubenswrapper[4799]: I0216 12:55:41.681298 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 12:55:41 crc kubenswrapper[4799]: W0216 12:55:41.687214 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6be377_3c2d_46ab_a9b1_3faa91644a58.slice/crio-d035fe802c3b71375992fcfd04fbed069923d5b6a0044a786506222fac92839f WatchSource:0}: Error finding container d035fe802c3b71375992fcfd04fbed069923d5b6a0044a786506222fac92839f: Status 404 returned error can't find the container with id d035fe802c3b71375992fcfd04fbed069923d5b6a0044a786506222fac92839f Feb 16 
12:55:42 crc kubenswrapper[4799]: I0216 12:55:42.358803 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a6be377-3c2d-46ab-a9b1-3faa91644a58","Type":"ContainerStarted","Data":"d035fe802c3b71375992fcfd04fbed069923d5b6a0044a786506222fac92839f"} Feb 16 12:55:42 crc kubenswrapper[4799]: I0216 12:55:42.362169 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdmt7" event={"ID":"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83","Type":"ContainerStarted","Data":"4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7"} Feb 16 12:55:42 crc kubenswrapper[4799]: I0216 12:55:42.363617 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52adb145-1b05-4515-a214-83731e3504b4","Type":"ContainerStarted","Data":"7fed2fa5d1dd3164f0430236176a495ccfba64cdb482567c191feadcca883f19"} Feb 16 12:55:43 crc kubenswrapper[4799]: I0216 12:55:43.378275 4799 generic.go:334] "Generic (PLEG): container finished" podID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerID="4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7" exitCode=0 Feb 16 12:55:43 crc kubenswrapper[4799]: I0216 12:55:43.378348 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdmt7" event={"ID":"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83","Type":"ContainerDied","Data":"4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7"} Feb 16 12:55:43 crc kubenswrapper[4799]: I0216 12:55:43.382723 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52adb145-1b05-4515-a214-83731e3504b4","Type":"ContainerStarted","Data":"0054765a23dbcfe48f410fb53bb8fc167dfba992a356b8da7fea81d8c57a3802"} Feb 16 12:55:44 crc kubenswrapper[4799]: I0216 12:55:44.395894 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"7a6be377-3c2d-46ab-a9b1-3faa91644a58","Type":"ContainerStarted","Data":"2e93efd60c857f4f7e5b37e614760cbfd8bc4b81d7307670383cf37ac42ebf13"} Feb 16 12:55:44 crc kubenswrapper[4799]: I0216 12:55:44.398582 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdmt7" event={"ID":"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83","Type":"ContainerStarted","Data":"9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb"} Feb 16 12:55:44 crc kubenswrapper[4799]: I0216 12:55:44.452753 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wdmt7" podStartSLOduration=3.8782138 podStartE2EDuration="6.452728127s" podCreationTimestamp="2026-02-16 12:55:38 +0000 UTC" firstStartedPulling="2026-02-16 12:55:41.34766248 +0000 UTC m=+1446.940677814" lastFinishedPulling="2026-02-16 12:55:43.922176807 +0000 UTC m=+1449.515192141" observedRunningTime="2026-02-16 12:55:44.445767769 +0000 UTC m=+1450.038783103" watchObservedRunningTime="2026-02-16 12:55:44.452728127 +0000 UTC m=+1450.045743471" Feb 16 12:55:49 crc kubenswrapper[4799]: I0216 12:55:49.297425 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:49 crc kubenswrapper[4799]: I0216 12:55:49.298053 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:49 crc kubenswrapper[4799]: I0216 12:55:49.346177 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:49 crc kubenswrapper[4799]: I0216 12:55:49.500884 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:49 crc kubenswrapper[4799]: I0216 12:55:49.597461 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-wdmt7"] Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.355094 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65ffbf6dcf-vl8nc"] Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.356985 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.359513 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.373297 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65ffbf6dcf-vl8nc"] Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.450601 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-openstack-edpm-ipam\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.450661 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-sb\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.450688 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgnqr\" (UniqueName: \"kubernetes.io/projected/5aac54cb-3650-498f-8981-c3e2d4c395a0-kube-api-access-cgnqr\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 
crc kubenswrapper[4799]: I0216 12:55:50.450882 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-config\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.450970 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-swift-storage-0\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.451037 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-nb\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.451070 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-svc\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.553193 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-swift-storage-0\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 
16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.553293 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-nb\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.553332 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-svc\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.554406 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-swift-storage-0\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.554439 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-svc\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.554559 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-openstack-edpm-ipam\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.554594 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-sb\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.554618 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgnqr\" (UniqueName: \"kubernetes.io/projected/5aac54cb-3650-498f-8981-c3e2d4c395a0-kube-api-access-cgnqr\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.554675 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-nb\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.554852 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-config\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.555319 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-sb\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.556013 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-config\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.556054 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-openstack-edpm-ipam\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.581185 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgnqr\" (UniqueName: \"kubernetes.io/projected/5aac54cb-3650-498f-8981-c3e2d4c395a0-kube-api-access-cgnqr\") pod \"dnsmasq-dns-65ffbf6dcf-vl8nc\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:50 crc kubenswrapper[4799]: I0216 12:55:50.679184 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:51 crc kubenswrapper[4799]: I0216 12:55:51.095289 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65ffbf6dcf-vl8nc"] Feb 16 12:55:51 crc kubenswrapper[4799]: I0216 12:55:51.472732 4799 generic.go:334] "Generic (PLEG): container finished" podID="5aac54cb-3650-498f-8981-c3e2d4c395a0" containerID="f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75" exitCode=0 Feb 16 12:55:51 crc kubenswrapper[4799]: I0216 12:55:51.472939 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" event={"ID":"5aac54cb-3650-498f-8981-c3e2d4c395a0","Type":"ContainerDied","Data":"f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75"} Feb 16 12:55:51 crc kubenswrapper[4799]: I0216 12:55:51.473208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" event={"ID":"5aac54cb-3650-498f-8981-c3e2d4c395a0","Type":"ContainerStarted","Data":"f0f9ad6186e940864c6eb1480979146a29b121dadf96747abe3cb4d9da70d209"} Feb 16 12:55:51 crc kubenswrapper[4799]: I0216 12:55:51.473668 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wdmt7" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerName="registry-server" containerID="cri-o://9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb" gracePeriod=2 Feb 16 12:55:51 crc kubenswrapper[4799]: I0216 12:55:51.792560 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:55:51 crc kubenswrapper[4799]: I0216 12:55:51.792880 4799 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:55:51 crc kubenswrapper[4799]: I0216 12:55:51.991280 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.092709 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-catalog-content\") pod \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.092781 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbj76\" (UniqueName: \"kubernetes.io/projected/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-kube-api-access-nbj76\") pod \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.092868 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-utilities\") pod \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\" (UID: \"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83\") " Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.093819 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-utilities" (OuterVolumeSpecName: "utilities") pod "20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" (UID: "20fb551c-1259-4cf8-b8ec-9dcd1fc88c83"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.104420 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-kube-api-access-nbj76" (OuterVolumeSpecName: "kube-api-access-nbj76") pod "20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" (UID: "20fb551c-1259-4cf8-b8ec-9dcd1fc88c83"). InnerVolumeSpecName "kube-api-access-nbj76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.165650 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" (UID: "20fb551c-1259-4cf8-b8ec-9dcd1fc88c83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.195043 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.195074 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbj76\" (UniqueName: \"kubernetes.io/projected/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-kube-api-access-nbj76\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.195084 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.485697 4799 generic.go:334] "Generic (PLEG): container finished" podID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" 
containerID="9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb" exitCode=0 Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.485771 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdmt7" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.485764 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdmt7" event={"ID":"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83","Type":"ContainerDied","Data":"9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb"} Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.486183 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdmt7" event={"ID":"20fb551c-1259-4cf8-b8ec-9dcd1fc88c83","Type":"ContainerDied","Data":"97d576c2e67916ea8c50f2c39e503b8ebe70640aa89466260df4631305725454"} Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.486215 4799 scope.go:117] "RemoveContainer" containerID="9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.490304 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" event={"ID":"5aac54cb-3650-498f-8981-c3e2d4c395a0","Type":"ContainerStarted","Data":"6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6"} Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.490445 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.511508 4799 scope.go:117] "RemoveContainer" containerID="4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.519066 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" podStartSLOduration=2.519048113 
podStartE2EDuration="2.519048113s" podCreationTimestamp="2026-02-16 12:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:52.513072613 +0000 UTC m=+1458.106087947" watchObservedRunningTime="2026-02-16 12:55:52.519048113 +0000 UTC m=+1458.112063447" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.539278 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wdmt7"] Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.549810 4799 scope.go:117] "RemoveContainer" containerID="0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.555478 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wdmt7"] Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.577224 4799 scope.go:117] "RemoveContainer" containerID="9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb" Feb 16 12:55:52 crc kubenswrapper[4799]: E0216 12:55:52.578796 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb\": container with ID starting with 9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb not found: ID does not exist" containerID="9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.578848 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb"} err="failed to get container status \"9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb\": rpc error: code = NotFound desc = could not find container \"9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb\": 
container with ID starting with 9ac6ad72f8f2f8b6378ca6d278189aaa834e621718e51d52a11036239b98d4bb not found: ID does not exist" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.578878 4799 scope.go:117] "RemoveContainer" containerID="4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7" Feb 16 12:55:52 crc kubenswrapper[4799]: E0216 12:55:52.579199 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7\": container with ID starting with 4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7 not found: ID does not exist" containerID="4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.579228 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7"} err="failed to get container status \"4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7\": rpc error: code = NotFound desc = could not find container \"4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7\": container with ID starting with 4f6e4a4eaa35d577ea0c3798f27eeb3ffd0fc5e99f9793e280114b7d0075f4d7 not found: ID does not exist" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.579242 4799 scope.go:117] "RemoveContainer" containerID="0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461" Feb 16 12:55:52 crc kubenswrapper[4799]: E0216 12:55:52.579533 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461\": container with ID starting with 0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461 not found: ID does not exist" 
containerID="0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461" Feb 16 12:55:52 crc kubenswrapper[4799]: I0216 12:55:52.579573 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461"} err="failed to get container status \"0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461\": rpc error: code = NotFound desc = could not find container \"0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461\": container with ID starting with 0bc341daa530b5d103f05cbf93ddc7bcec86cb180b066b0500ff46360eb90461 not found: ID does not exist" Feb 16 12:55:53 crc kubenswrapper[4799]: I0216 12:55:53.160908 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" path="/var/lib/kubelet/pods/20fb551c-1259-4cf8-b8ec-9dcd1fc88c83/volumes" Feb 16 12:56:00 crc kubenswrapper[4799]: I0216 12:56:00.682292 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:56:00 crc kubenswrapper[4799]: I0216 12:56:00.763105 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9fb648679-bxg6f"] Feb 16 12:56:00 crc kubenswrapper[4799]: I0216 12:56:00.764387 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" podUID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" containerName="dnsmasq-dns" containerID="cri-o://deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83" gracePeriod=10 Feb 16 12:56:00 crc kubenswrapper[4799]: I0216 12:56:00.999037 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76846d67df-2cl9g"] Feb 16 12:56:00 crc kubenswrapper[4799]: E0216 12:56:00.999450 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerName="extract-utilities" Feb 16 
12:56:00 crc kubenswrapper[4799]: I0216 12:56:00.999470 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerName="extract-utilities" Feb 16 12:56:00 crc kubenswrapper[4799]: E0216 12:56:00.999486 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerName="registry-server" Feb 16 12:56:00 crc kubenswrapper[4799]: I0216 12:56:00.999492 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerName="registry-server" Feb 16 12:56:00 crc kubenswrapper[4799]: E0216 12:56:00.999526 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerName="extract-content" Feb 16 12:56:00 crc kubenswrapper[4799]: I0216 12:56:00.999532 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerName="extract-content" Feb 16 12:56:00 crc kubenswrapper[4799]: I0216 12:56:00.999700 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fb551c-1259-4cf8-b8ec-9dcd1fc88c83" containerName="registry-server" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.000731 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.016619 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76846d67df-2cl9g"] Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.203349 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-dns-svc\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.204716 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-openstack-edpm-ipam\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.204830 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-dns-swift-storage-0\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.204874 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppp4j\" (UniqueName: \"kubernetes.io/projected/77997ea7-755d-40ed-94d6-baab5bd86a9b-kube-api-access-ppp4j\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.204944 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-config\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.204969 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-ovsdbserver-sb\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.204991 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-ovsdbserver-nb\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.306536 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-config\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.306605 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-ovsdbserver-sb\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.306642 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-ovsdbserver-nb\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.306713 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-dns-svc\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.306821 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-openstack-edpm-ipam\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.306890 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-dns-swift-storage-0\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.306925 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppp4j\" (UniqueName: \"kubernetes.io/projected/77997ea7-755d-40ed-94d6-baab5bd86a9b-kube-api-access-ppp4j\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.308176 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-config\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.311505 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-dns-svc\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.312096 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-ovsdbserver-nb\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.312884 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-openstack-edpm-ipam\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.313929 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-dns-swift-storage-0\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.318257 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/77997ea7-755d-40ed-94d6-baab5bd86a9b-ovsdbserver-sb\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.325271 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppp4j\" (UniqueName: \"kubernetes.io/projected/77997ea7-755d-40ed-94d6-baab5bd86a9b-kube-api-access-ppp4j\") pod \"dnsmasq-dns-76846d67df-2cl9g\" (UID: \"77997ea7-755d-40ed-94d6-baab5bd86a9b\") " pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.398420 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.510226 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-sb\") pod \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.510299 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-nb\") pod \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.510340 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6952s\" (UniqueName: \"kubernetes.io/projected/8e2b0d58-67b1-4f87-8e8f-819e56b29093-kube-api-access-6952s\") pod \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.510378 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-svc\") pod \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.510417 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-config\") pod \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.510640 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-swift-storage-0\") pod \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\" (UID: \"8e2b0d58-67b1-4f87-8e8f-819e56b29093\") " Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.529559 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2b0d58-67b1-4f87-8e8f-819e56b29093-kube-api-access-6952s" (OuterVolumeSpecName: "kube-api-access-6952s") pod "8e2b0d58-67b1-4f87-8e8f-819e56b29093" (UID: "8e2b0d58-67b1-4f87-8e8f-819e56b29093"). InnerVolumeSpecName "kube-api-access-6952s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.570340 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e2b0d58-67b1-4f87-8e8f-819e56b29093" (UID: "8e2b0d58-67b1-4f87-8e8f-819e56b29093"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.572100 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e2b0d58-67b1-4f87-8e8f-819e56b29093" (UID: "8e2b0d58-67b1-4f87-8e8f-819e56b29093"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.573967 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8e2b0d58-67b1-4f87-8e8f-819e56b29093" (UID: "8e2b0d58-67b1-4f87-8e8f-819e56b29093"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.574964 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e2b0d58-67b1-4f87-8e8f-819e56b29093" (UID: "8e2b0d58-67b1-4f87-8e8f-819e56b29093"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.585677 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-config" (OuterVolumeSpecName: "config") pod "8e2b0d58-67b1-4f87-8e8f-819e56b29093" (UID: "8e2b0d58-67b1-4f87-8e8f-819e56b29093"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.587949 4799 generic.go:334] "Generic (PLEG): container finished" podID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" containerID="deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83" exitCode=0 Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.587997 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" event={"ID":"8e2b0d58-67b1-4f87-8e8f-819e56b29093","Type":"ContainerDied","Data":"deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83"} Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.588027 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" event={"ID":"8e2b0d58-67b1-4f87-8e8f-819e56b29093","Type":"ContainerDied","Data":"882bef35ec5ac7f1ce89972717f53222d4a08ab94972b296719aeed657aa86b8"} Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.588044 4799 scope.go:117] "RemoveContainer" containerID="deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.588069 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9fb648679-bxg6f" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.613860 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.613902 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.613912 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.613922 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6952s\" (UniqueName: \"kubernetes.io/projected/8e2b0d58-67b1-4f87-8e8f-819e56b29093-kube-api-access-6952s\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.613936 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.613945 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2b0d58-67b1-4f87-8e8f-819e56b29093-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.623462 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.623759 4799 scope.go:117] "RemoveContainer" containerID="6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.639760 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9fb648679-bxg6f"] Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.648326 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9fb648679-bxg6f"] Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.679429 4799 scope.go:117] "RemoveContainer" containerID="deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83" Feb 16 12:56:01 crc kubenswrapper[4799]: E0216 12:56:01.679921 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83\": container with ID starting with deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83 not found: ID does not exist" containerID="deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.679958 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83"} err="failed to get container status \"deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83\": rpc error: code = NotFound desc = could not find container \"deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83\": container with ID starting with deacc495ebabd4202bcd151eff4215f89af5a6bbacad4866915ef311f557dd83 not found: ID does not exist" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.679986 4799 scope.go:117] "RemoveContainer" containerID="6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f" Feb 16 
12:56:01 crc kubenswrapper[4799]: E0216 12:56:01.680186 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f\": container with ID starting with 6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f not found: ID does not exist" containerID="6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f" Feb 16 12:56:01 crc kubenswrapper[4799]: I0216 12:56:01.680216 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f"} err="failed to get container status \"6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f\": rpc error: code = NotFound desc = could not find container \"6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f\": container with ID starting with 6184a71bacff925bfad00952679303424065e3bd8c42526e6c5cb8696633683f not found: ID does not exist" Feb 16 12:56:02 crc kubenswrapper[4799]: W0216 12:56:02.219678 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77997ea7_755d_40ed_94d6_baab5bd86a9b.slice/crio-51a7ee2d27c4c551645e0878d2dd628a3e17d3da2dd176782cbcbebd3b0821d2 WatchSource:0}: Error finding container 51a7ee2d27c4c551645e0878d2dd628a3e17d3da2dd176782cbcbebd3b0821d2: Status 404 returned error can't find the container with id 51a7ee2d27c4c551645e0878d2dd628a3e17d3da2dd176782cbcbebd3b0821d2 Feb 16 12:56:02 crc kubenswrapper[4799]: I0216 12:56:02.233750 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76846d67df-2cl9g"] Feb 16 12:56:02 crc kubenswrapper[4799]: I0216 12:56:02.607826 4799 generic.go:334] "Generic (PLEG): container finished" podID="77997ea7-755d-40ed-94d6-baab5bd86a9b" 
containerID="22792d4471064d712a566dacffcb7a3b4be1b400b95ebf452d3f389e23f51db1" exitCode=0 Feb 16 12:56:02 crc kubenswrapper[4799]: I0216 12:56:02.607907 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76846d67df-2cl9g" event={"ID":"77997ea7-755d-40ed-94d6-baab5bd86a9b","Type":"ContainerDied","Data":"22792d4471064d712a566dacffcb7a3b4be1b400b95ebf452d3f389e23f51db1"} Feb 16 12:56:02 crc kubenswrapper[4799]: I0216 12:56:02.608506 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76846d67df-2cl9g" event={"ID":"77997ea7-755d-40ed-94d6-baab5bd86a9b","Type":"ContainerStarted","Data":"51a7ee2d27c4c551645e0878d2dd628a3e17d3da2dd176782cbcbebd3b0821d2"} Feb 16 12:56:03 crc kubenswrapper[4799]: I0216 12:56:03.163949 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" path="/var/lib/kubelet/pods/8e2b0d58-67b1-4f87-8e8f-819e56b29093/volumes" Feb 16 12:56:03 crc kubenswrapper[4799]: I0216 12:56:03.620465 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76846d67df-2cl9g" event={"ID":"77997ea7-755d-40ed-94d6-baab5bd86a9b","Type":"ContainerStarted","Data":"6145fcadad5dd252d69a06aa5f3e25b3666aaf3008eaa2f0da4fd2a164174efb"} Feb 16 12:56:03 crc kubenswrapper[4799]: I0216 12:56:03.620958 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:03 crc kubenswrapper[4799]: I0216 12:56:03.644730 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76846d67df-2cl9g" podStartSLOduration=3.644709783 podStartE2EDuration="3.644709783s" podCreationTimestamp="2026-02-16 12:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:03.638029223 +0000 UTC m=+1469.231044557" watchObservedRunningTime="2026-02-16 12:56:03.644709783 
+0000 UTC m=+1469.237725117" Feb 16 12:56:11 crc kubenswrapper[4799]: I0216 12:56:11.626110 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76846d67df-2cl9g" Feb 16 12:56:11 crc kubenswrapper[4799]: I0216 12:56:11.733835 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65ffbf6dcf-vl8nc"] Feb 16 12:56:11 crc kubenswrapper[4799]: I0216 12:56:11.734357 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" podUID="5aac54cb-3650-498f-8981-c3e2d4c395a0" containerName="dnsmasq-dns" containerID="cri-o://6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6" gracePeriod=10 Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.227526 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.348830 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-svc\") pod \"5aac54cb-3650-498f-8981-c3e2d4c395a0\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.348969 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-swift-storage-0\") pod \"5aac54cb-3650-498f-8981-c3e2d4c395a0\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.349036 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-nb\") pod \"5aac54cb-3650-498f-8981-c3e2d4c395a0\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " Feb 16 12:56:12 
crc kubenswrapper[4799]: I0216 12:56:12.349105 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-config\") pod \"5aac54cb-3650-498f-8981-c3e2d4c395a0\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.349224 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-sb\") pod \"5aac54cb-3650-498f-8981-c3e2d4c395a0\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.349314 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-openstack-edpm-ipam\") pod \"5aac54cb-3650-498f-8981-c3e2d4c395a0\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.349337 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgnqr\" (UniqueName: \"kubernetes.io/projected/5aac54cb-3650-498f-8981-c3e2d4c395a0-kube-api-access-cgnqr\") pod \"5aac54cb-3650-498f-8981-c3e2d4c395a0\" (UID: \"5aac54cb-3650-498f-8981-c3e2d4c395a0\") " Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.357402 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aac54cb-3650-498f-8981-c3e2d4c395a0-kube-api-access-cgnqr" (OuterVolumeSpecName: "kube-api-access-cgnqr") pod "5aac54cb-3650-498f-8981-c3e2d4c395a0" (UID: "5aac54cb-3650-498f-8981-c3e2d4c395a0"). InnerVolumeSpecName "kube-api-access-cgnqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.406560 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5aac54cb-3650-498f-8981-c3e2d4c395a0" (UID: "5aac54cb-3650-498f-8981-c3e2d4c395a0"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.408088 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5aac54cb-3650-498f-8981-c3e2d4c395a0" (UID: "5aac54cb-3650-498f-8981-c3e2d4c395a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.409049 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5aac54cb-3650-498f-8981-c3e2d4c395a0" (UID: "5aac54cb-3650-498f-8981-c3e2d4c395a0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.410181 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5aac54cb-3650-498f-8981-c3e2d4c395a0" (UID: "5aac54cb-3650-498f-8981-c3e2d4c395a0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.413249 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-config" (OuterVolumeSpecName: "config") pod "5aac54cb-3650-498f-8981-c3e2d4c395a0" (UID: "5aac54cb-3650-498f-8981-c3e2d4c395a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.417230 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5aac54cb-3650-498f-8981-c3e2d4c395a0" (UID: "5aac54cb-3650-498f-8981-c3e2d4c395a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.452741 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgnqr\" (UniqueName: \"kubernetes.io/projected/5aac54cb-3650-498f-8981-c3e2d4c395a0-kube-api-access-cgnqr\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.452785 4799 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.452796 4799 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.452806 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" 
Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.452815 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.452823 4799 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.452831 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5aac54cb-3650-498f-8981-c3e2d4c395a0-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.749307 4799 generic.go:334] "Generic (PLEG): container finished" podID="5aac54cb-3650-498f-8981-c3e2d4c395a0" containerID="6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6" exitCode=0 Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.749372 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" event={"ID":"5aac54cb-3650-498f-8981-c3e2d4c395a0","Type":"ContainerDied","Data":"6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6"} Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.749408 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" event={"ID":"5aac54cb-3650-498f-8981-c3e2d4c395a0","Type":"ContainerDied","Data":"f0f9ad6186e940864c6eb1480979146a29b121dadf96747abe3cb4d9da70d209"} Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.749431 4799 scope.go:117] "RemoveContainer" containerID="6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.749491 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65ffbf6dcf-vl8nc" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.769553 4799 scope.go:117] "RemoveContainer" containerID="f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.797424 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65ffbf6dcf-vl8nc"] Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.803566 4799 scope.go:117] "RemoveContainer" containerID="6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6" Feb 16 12:56:12 crc kubenswrapper[4799]: E0216 12:56:12.804033 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6\": container with ID starting with 6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6 not found: ID does not exist" containerID="6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.804067 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6"} err="failed to get container status \"6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6\": rpc error: code = NotFound desc = could not find container \"6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6\": container with ID starting with 6610e64abe98986f00e592a722a0a91a7b88cf928de3b2e1bd4dee13798007c6 not found: ID does not exist" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.804094 4799 scope.go:117] "RemoveContainer" containerID="f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75" Feb 16 12:56:12 crc kubenswrapper[4799]: E0216 12:56:12.804578 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75\": container with ID starting with f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75 not found: ID does not exist" containerID="f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.804645 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75"} err="failed to get container status \"f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75\": rpc error: code = NotFound desc = could not find container \"f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75\": container with ID starting with f710040fc232b892e8030915ab1a3551d3eefd1fca79b274f32aaa9aedffdb75 not found: ID does not exist" Feb 16 12:56:12 crc kubenswrapper[4799]: I0216 12:56:12.807166 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65ffbf6dcf-vl8nc"] Feb 16 12:56:13 crc kubenswrapper[4799]: I0216 12:56:13.176488 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aac54cb-3650-498f-8981-c3e2d4c395a0" path="/var/lib/kubelet/pods/5aac54cb-3650-498f-8981-c3e2d4c395a0/volumes" Feb 16 12:56:16 crc kubenswrapper[4799]: I0216 12:56:16.788780 4799 generic.go:334] "Generic (PLEG): container finished" podID="52adb145-1b05-4515-a214-83731e3504b4" containerID="0054765a23dbcfe48f410fb53bb8fc167dfba992a356b8da7fea81d8c57a3802" exitCode=0 Feb 16 12:56:16 crc kubenswrapper[4799]: I0216 12:56:16.788861 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52adb145-1b05-4515-a214-83731e3504b4","Type":"ContainerDied","Data":"0054765a23dbcfe48f410fb53bb8fc167dfba992a356b8da7fea81d8c57a3802"} Feb 16 12:56:16 crc kubenswrapper[4799]: I0216 12:56:16.790814 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="7a6be377-3c2d-46ab-a9b1-3faa91644a58" containerID="2e93efd60c857f4f7e5b37e614760cbfd8bc4b81d7307670383cf37ac42ebf13" exitCode=0 Feb 16 12:56:16 crc kubenswrapper[4799]: I0216 12:56:16.790844 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a6be377-3c2d-46ab-a9b1-3faa91644a58","Type":"ContainerDied","Data":"2e93efd60c857f4f7e5b37e614760cbfd8bc4b81d7307670383cf37ac42ebf13"} Feb 16 12:56:17 crc kubenswrapper[4799]: I0216 12:56:17.816663 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a6be377-3c2d-46ab-a9b1-3faa91644a58","Type":"ContainerStarted","Data":"6dd7f6392bb507809ea3aa2cb90ec9068cedd9b4414ae4a39b1806469d0f1ddb"} Feb 16 12:56:17 crc kubenswrapper[4799]: I0216 12:56:17.817506 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 12:56:17 crc kubenswrapper[4799]: I0216 12:56:17.820044 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52adb145-1b05-4515-a214-83731e3504b4","Type":"ContainerStarted","Data":"c3dd171b4ed1bc285c31057dc183dd57c20d6dd2ff068892bf25782771647200"} Feb 16 12:56:17 crc kubenswrapper[4799]: I0216 12:56:17.820585 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:56:17 crc kubenswrapper[4799]: I0216 12:56:17.849325 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.849300927 podStartE2EDuration="37.849300927s" podCreationTimestamp="2026-02-16 12:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:17.839893969 +0000 UTC m=+1483.432909303" watchObservedRunningTime="2026-02-16 12:56:17.849300927 +0000 UTC m=+1483.442316361" Feb 16 12:56:17 crc kubenswrapper[4799]: I0216 
12:56:17.874428 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.874404901 podStartE2EDuration="37.874404901s" podCreationTimestamp="2026-02-16 12:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:17.866033333 +0000 UTC m=+1483.459048667" watchObservedRunningTime="2026-02-16 12:56:17.874404901 +0000 UTC m=+1483.467420235" Feb 16 12:56:21 crc kubenswrapper[4799]: I0216 12:56:21.793367 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:56:21 crc kubenswrapper[4799]: I0216 12:56:21.793729 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:56:21 crc kubenswrapper[4799]: I0216 12:56:21.793785 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:56:21 crc kubenswrapper[4799]: I0216 12:56:21.794459 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1050488caaf418ecf3c571c9e2581e4f4da347fd70264d129d94529e08845412"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:56:21 crc kubenswrapper[4799]: I0216 12:56:21.794511 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://1050488caaf418ecf3c571c9e2581e4f4da347fd70264d129d94529e08845412" gracePeriod=600 Feb 16 12:56:22 crc kubenswrapper[4799]: I0216 12:56:22.879182 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="1050488caaf418ecf3c571c9e2581e4f4da347fd70264d129d94529e08845412" exitCode=0 Feb 16 12:56:22 crc kubenswrapper[4799]: I0216 12:56:22.879255 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"1050488caaf418ecf3c571c9e2581e4f4da347fd70264d129d94529e08845412"} Feb 16 12:56:22 crc kubenswrapper[4799]: I0216 12:56:22.879776 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15"} Feb 16 12:56:22 crc kubenswrapper[4799]: I0216 12:56:22.879803 4799 scope.go:117] "RemoveContainer" containerID="02716d4728e3df68a334a717adc33b15d61e7b7d0fc4e582388c3db1323e8e1a" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.249456 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8"] Feb 16 12:56:29 crc kubenswrapper[4799]: E0216 12:56:29.250305 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aac54cb-3650-498f-8981-c3e2d4c395a0" containerName="dnsmasq-dns" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.250319 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aac54cb-3650-498f-8981-c3e2d4c395a0" 
containerName="dnsmasq-dns" Feb 16 12:56:29 crc kubenswrapper[4799]: E0216 12:56:29.250333 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" containerName="init" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.250340 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" containerName="init" Feb 16 12:56:29 crc kubenswrapper[4799]: E0216 12:56:29.250361 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aac54cb-3650-498f-8981-c3e2d4c395a0" containerName="init" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.250367 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aac54cb-3650-498f-8981-c3e2d4c395a0" containerName="init" Feb 16 12:56:29 crc kubenswrapper[4799]: E0216 12:56:29.250403 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" containerName="dnsmasq-dns" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.250409 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" containerName="dnsmasq-dns" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.250573 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aac54cb-3650-498f-8981-c3e2d4c395a0" containerName="dnsmasq-dns" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.250595 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2b0d58-67b1-4f87-8e8f-819e56b29093" containerName="dnsmasq-dns" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.251275 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.253791 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.254229 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.255342 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.255922 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.264367 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8"] Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.406436 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrk2c\" (UniqueName: \"kubernetes.io/projected/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-kube-api-access-rrk2c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.406744 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: 
I0216 12:56:29.407148 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.407209 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.509670 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrk2c\" (UniqueName: \"kubernetes.io/projected/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-kube-api-access-rrk2c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.509763 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.509840 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.509859 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.515678 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.516619 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.517549 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.542454 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrk2c\" (UniqueName: \"kubernetes.io/projected/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-kube-api-access-rrk2c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:29 crc kubenswrapper[4799]: I0216 12:56:29.617658 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:30 crc kubenswrapper[4799]: I0216 12:56:30.231638 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8"] Feb 16 12:56:30 crc kubenswrapper[4799]: I0216 12:56:30.970817 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" event={"ID":"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb","Type":"ContainerStarted","Data":"798abbb380cb99df07ae8a04418328e8d79caf75c0da4b04393e8c7f7e39c33f"} Feb 16 12:56:31 crc kubenswrapper[4799]: I0216 12:56:31.049491 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 12:56:31 crc kubenswrapper[4799]: I0216 12:56:31.187807 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 12:56:38 crc kubenswrapper[4799]: I0216 12:56:38.227754 4799 scope.go:117] "RemoveContainer" containerID="6cd0469fed761d02e54904414d4e2e7a73772546160cbcf48fea78999d533374" Feb 16 12:56:40 crc kubenswrapper[4799]: I0216 12:56:40.355515 4799 scope.go:117] "RemoveContainer" containerID="992403e79b53313dd3821cf0f592b6e35116ab2864fdb3629455fa8dcc9e7e93" Feb 16 12:56:40 crc kubenswrapper[4799]: I0216 
12:56:40.445482 4799 scope.go:117] "RemoveContainer" containerID="6f518bec20066a7212091392726168c088d4d5961c302aaa4d6e508aebb28360" Feb 16 12:56:40 crc kubenswrapper[4799]: I0216 12:56:40.449302 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 12:56:40 crc kubenswrapper[4799]: I0216 12:56:40.578249 4799 scope.go:117] "RemoveContainer" containerID="a934223813f802c99ad4347046828761b2fd1fa3074d997cba0b6034cc4c14c8" Feb 16 12:56:40 crc kubenswrapper[4799]: I0216 12:56:40.623321 4799 scope.go:117] "RemoveContainer" containerID="c8f6d15b16d49252fe7dfceef2ed17ed454d46659fc368ae56d30c92e2bb5889" Feb 16 12:56:40 crc kubenswrapper[4799]: I0216 12:56:40.652544 4799 scope.go:117] "RemoveContainer" containerID="6132e87ef3b709ae85adf98fb009b758f5bba35dd5dbb36931fac3276a325e8b" Feb 16 12:56:40 crc kubenswrapper[4799]: I0216 12:56:40.682888 4799 scope.go:117] "RemoveContainer" containerID="2014761acff9b23d77794ad37caeba7c52f1bab979939ed97275d0e938a0da8b" Feb 16 12:56:40 crc kubenswrapper[4799]: I0216 12:56:40.708586 4799 scope.go:117] "RemoveContainer" containerID="5e12f56f901c2378eea66964f27f2075afc1c1fd980d86e35efff241a2c443b5" Feb 16 12:56:41 crc kubenswrapper[4799]: I0216 12:56:41.086058 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" event={"ID":"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb","Type":"ContainerStarted","Data":"e4eefd088ae7305367a9d3530c9318e239849b88f832522d832451a51b1b8ec2"} Feb 16 12:56:41 crc kubenswrapper[4799]: I0216 12:56:41.112659 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" podStartSLOduration=1.901664694 podStartE2EDuration="12.112641511s" podCreationTimestamp="2026-02-16 12:56:29 +0000 UTC" firstStartedPulling="2026-02-16 12:56:30.235335019 +0000 UTC m=+1495.828350353" lastFinishedPulling="2026-02-16 12:56:40.446311836 
+0000 UTC m=+1506.039327170" observedRunningTime="2026-02-16 12:56:41.101893585 +0000 UTC m=+1506.694909119" watchObservedRunningTime="2026-02-16 12:56:41.112641511 +0000 UTC m=+1506.705656845" Feb 16 12:56:42 crc kubenswrapper[4799]: I0216 12:56:42.842292 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jmmft"] Feb 16 12:56:42 crc kubenswrapper[4799]: I0216 12:56:42.846015 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:42 crc kubenswrapper[4799]: I0216 12:56:42.851568 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmmft"] Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.032734 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-catalog-content\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.032801 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-utilities\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.032852 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8zp\" (UniqueName: \"kubernetes.io/projected/fb6abd32-e17a-4892-861c-112eeedc29be-kube-api-access-dq8zp\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc 
kubenswrapper[4799]: I0216 12:56:43.134808 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-catalog-content\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.134869 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-utilities\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.134924 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq8zp\" (UniqueName: \"kubernetes.io/projected/fb6abd32-e17a-4892-861c-112eeedc29be-kube-api-access-dq8zp\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.135983 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-catalog-content\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.136006 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-utilities\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.190770 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq8zp\" (UniqueName: \"kubernetes.io/projected/fb6abd32-e17a-4892-861c-112eeedc29be-kube-api-access-dq8zp\") pod \"redhat-marketplace-jmmft\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:43 crc kubenswrapper[4799]: I0216 12:56:43.470964 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:44 crc kubenswrapper[4799]: W0216 12:56:44.028426 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb6abd32_e17a_4892_861c_112eeedc29be.slice/crio-6d520f7fffa202eb94a931da5f3f88daf31c778a257fd7d3e800a239e7d94af1 WatchSource:0}: Error finding container 6d520f7fffa202eb94a931da5f3f88daf31c778a257fd7d3e800a239e7d94af1: Status 404 returned error can't find the container with id 6d520f7fffa202eb94a931da5f3f88daf31c778a257fd7d3e800a239e7d94af1 Feb 16 12:56:44 crc kubenswrapper[4799]: I0216 12:56:44.029275 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmmft"] Feb 16 12:56:44 crc kubenswrapper[4799]: I0216 12:56:44.116142 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmmft" event={"ID":"fb6abd32-e17a-4892-861c-112eeedc29be","Type":"ContainerStarted","Data":"6d520f7fffa202eb94a931da5f3f88daf31c778a257fd7d3e800a239e7d94af1"} Feb 16 12:56:45 crc kubenswrapper[4799]: I0216 12:56:45.130098 4799 generic.go:334] "Generic (PLEG): container finished" podID="fb6abd32-e17a-4892-861c-112eeedc29be" containerID="56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389" exitCode=0 Feb 16 12:56:45 crc kubenswrapper[4799]: I0216 12:56:45.130202 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmmft" 
event={"ID":"fb6abd32-e17a-4892-861c-112eeedc29be","Type":"ContainerDied","Data":"56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389"} Feb 16 12:56:46 crc kubenswrapper[4799]: I0216 12:56:46.141668 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmmft" event={"ID":"fb6abd32-e17a-4892-861c-112eeedc29be","Type":"ContainerStarted","Data":"8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9"} Feb 16 12:56:47 crc kubenswrapper[4799]: I0216 12:56:47.154552 4799 generic.go:334] "Generic (PLEG): container finished" podID="fb6abd32-e17a-4892-861c-112eeedc29be" containerID="8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9" exitCode=0 Feb 16 12:56:47 crc kubenswrapper[4799]: I0216 12:56:47.161278 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmmft" event={"ID":"fb6abd32-e17a-4892-861c-112eeedc29be","Type":"ContainerDied","Data":"8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9"} Feb 16 12:56:47 crc kubenswrapper[4799]: I0216 12:56:47.161321 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmmft" event={"ID":"fb6abd32-e17a-4892-861c-112eeedc29be","Type":"ContainerStarted","Data":"f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3"} Feb 16 12:56:47 crc kubenswrapper[4799]: I0216 12:56:47.183478 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jmmft" podStartSLOduration=3.543718589 podStartE2EDuration="5.18345806s" podCreationTimestamp="2026-02-16 12:56:42 +0000 UTC" firstStartedPulling="2026-02-16 12:56:45.133065401 +0000 UTC m=+1510.726080735" lastFinishedPulling="2026-02-16 12:56:46.772804872 +0000 UTC m=+1512.365820206" observedRunningTime="2026-02-16 12:56:47.176543003 +0000 UTC m=+1512.769558337" watchObservedRunningTime="2026-02-16 12:56:47.18345806 +0000 UTC 
m=+1512.776473394" Feb 16 12:56:53 crc kubenswrapper[4799]: I0216 12:56:53.270875 4799 generic.go:334] "Generic (PLEG): container finished" podID="7cc337e4-c7f3-47cd-bd87-4d6230d8efcb" containerID="e4eefd088ae7305367a9d3530c9318e239849b88f832522d832451a51b1b8ec2" exitCode=0 Feb 16 12:56:53 crc kubenswrapper[4799]: I0216 12:56:53.270930 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" event={"ID":"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb","Type":"ContainerDied","Data":"e4eefd088ae7305367a9d3530c9318e239849b88f832522d832451a51b1b8ec2"} Feb 16 12:56:53 crc kubenswrapper[4799]: I0216 12:56:53.471322 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:53 crc kubenswrapper[4799]: I0216 12:56:53.471462 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:53 crc kubenswrapper[4799]: I0216 12:56:53.521440 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.347143 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.413266 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmmft"] Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.752320 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.899758 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-inventory\") pod \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.900286 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-repo-setup-combined-ca-bundle\") pod \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.900326 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-ssh-key-openstack-edpm-ipam\") pod \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.900424 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrk2c\" (UniqueName: \"kubernetes.io/projected/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-kube-api-access-rrk2c\") pod \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\" (UID: \"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb\") " Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.906158 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7cc337e4-c7f3-47cd-bd87-4d6230d8efcb" (UID: "7cc337e4-c7f3-47cd-bd87-4d6230d8efcb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.909637 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-kube-api-access-rrk2c" (OuterVolumeSpecName: "kube-api-access-rrk2c") pod "7cc337e4-c7f3-47cd-bd87-4d6230d8efcb" (UID: "7cc337e4-c7f3-47cd-bd87-4d6230d8efcb"). InnerVolumeSpecName "kube-api-access-rrk2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.937567 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7cc337e4-c7f3-47cd-bd87-4d6230d8efcb" (UID: "7cc337e4-c7f3-47cd-bd87-4d6230d8efcb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:56:54 crc kubenswrapper[4799]: I0216 12:56:54.943831 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-inventory" (OuterVolumeSpecName: "inventory") pod "7cc337e4-c7f3-47cd-bd87-4d6230d8efcb" (UID: "7cc337e4-c7f3-47cd-bd87-4d6230d8efcb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.003723 4799 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.003770 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.003784 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrk2c\" (UniqueName: \"kubernetes.io/projected/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-kube-api-access-rrk2c\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.003798 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc337e4-c7f3-47cd-bd87-4d6230d8efcb-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.306020 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.306111 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8" event={"ID":"7cc337e4-c7f3-47cd-bd87-4d6230d8efcb","Type":"ContainerDied","Data":"798abbb380cb99df07ae8a04418328e8d79caf75c0da4b04393e8c7f7e39c33f"} Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.306170 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798abbb380cb99df07ae8a04418328e8d79caf75c0da4b04393e8c7f7e39c33f" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.380593 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d"] Feb 16 12:56:55 crc kubenswrapper[4799]: E0216 12:56:55.381156 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc337e4-c7f3-47cd-bd87-4d6230d8efcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.381176 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc337e4-c7f3-47cd-bd87-4d6230d8efcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.381440 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc337e4-c7f3-47cd-bd87-4d6230d8efcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.382289 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.388240 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.388264 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.388581 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.391319 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.392794 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d"] Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.429096 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rck8\" (UniqueName: \"kubernetes.io/projected/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-kube-api-access-4rck8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.429297 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.429330 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.533336 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.533423 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.533563 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rck8\" (UniqueName: \"kubernetes.io/projected/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-kube-api-access-4rck8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.538627 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: 
\"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.541076 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.552739 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rck8\" (UniqueName: \"kubernetes.io/projected/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-kube-api-access-4rck8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8xh9d\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:55 crc kubenswrapper[4799]: I0216 12:56:55.712393 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.240393 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d"] Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.321082 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" event={"ID":"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5","Type":"ContainerStarted","Data":"c9db9fdd54bb9ded3e9d957623ead0cd12bc63ade7deaf9326e8f75540ea664e"} Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.321267 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jmmft" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" containerName="registry-server" containerID="cri-o://f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3" gracePeriod=2 Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.790426 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.965289 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-catalog-content\") pod \"fb6abd32-e17a-4892-861c-112eeedc29be\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.965460 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq8zp\" (UniqueName: \"kubernetes.io/projected/fb6abd32-e17a-4892-861c-112eeedc29be-kube-api-access-dq8zp\") pod \"fb6abd32-e17a-4892-861c-112eeedc29be\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.965769 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-utilities\") pod \"fb6abd32-e17a-4892-861c-112eeedc29be\" (UID: \"fb6abd32-e17a-4892-861c-112eeedc29be\") " Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.966720 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-utilities" (OuterVolumeSpecName: "utilities") pod "fb6abd32-e17a-4892-861c-112eeedc29be" (UID: "fb6abd32-e17a-4892-861c-112eeedc29be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4799]: I0216 12:56:56.974533 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6abd32-e17a-4892-861c-112eeedc29be-kube-api-access-dq8zp" (OuterVolumeSpecName: "kube-api-access-dq8zp") pod "fb6abd32-e17a-4892-861c-112eeedc29be" (UID: "fb6abd32-e17a-4892-861c-112eeedc29be"). InnerVolumeSpecName "kube-api-access-dq8zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.011628 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb6abd32-e17a-4892-861c-112eeedc29be" (UID: "fb6abd32-e17a-4892-861c-112eeedc29be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.069249 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.069282 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6abd32-e17a-4892-861c-112eeedc29be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.069293 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq8zp\" (UniqueName: \"kubernetes.io/projected/fb6abd32-e17a-4892-861c-112eeedc29be-kube-api-access-dq8zp\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.335394 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" event={"ID":"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5","Type":"ContainerStarted","Data":"fa0439eafb31fde53705ae224989acd95e93d49a08038ee89f1bcf5a87fbae21"} Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.339110 4799 generic.go:334] "Generic (PLEG): container finished" podID="fb6abd32-e17a-4892-861c-112eeedc29be" containerID="f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3" exitCode=0 Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.339187 4799 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmmft" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.339355 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmmft" event={"ID":"fb6abd32-e17a-4892-861c-112eeedc29be","Type":"ContainerDied","Data":"f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3"} Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.339466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmmft" event={"ID":"fb6abd32-e17a-4892-861c-112eeedc29be","Type":"ContainerDied","Data":"6d520f7fffa202eb94a931da5f3f88daf31c778a257fd7d3e800a239e7d94af1"} Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.339526 4799 scope.go:117] "RemoveContainer" containerID="f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.366218 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" podStartSLOduration=1.607680115 podStartE2EDuration="2.366194324s" podCreationTimestamp="2026-02-16 12:56:55 +0000 UTC" firstStartedPulling="2026-02-16 12:56:56.250397275 +0000 UTC m=+1521.843412609" lastFinishedPulling="2026-02-16 12:56:57.008911484 +0000 UTC m=+1522.601926818" observedRunningTime="2026-02-16 12:56:57.355351545 +0000 UTC m=+1522.948366879" watchObservedRunningTime="2026-02-16 12:56:57.366194324 +0000 UTC m=+1522.959209678" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.389164 4799 scope.go:117] "RemoveContainer" containerID="8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.390044 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmmft"] Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.400342 4799 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmmft"] Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.412583 4799 scope.go:117] "RemoveContainer" containerID="56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.445156 4799 scope.go:117] "RemoveContainer" containerID="f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3" Feb 16 12:56:57 crc kubenswrapper[4799]: E0216 12:56:57.445741 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3\": container with ID starting with f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3 not found: ID does not exist" containerID="f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.445890 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3"} err="failed to get container status \"f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3\": rpc error: code = NotFound desc = could not find container \"f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3\": container with ID starting with f32dfb2b298e4faea7d831f1433546c4ba1c41ebf23e34c8c0938b136d3a0eb3 not found: ID does not exist" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.445998 4799 scope.go:117] "RemoveContainer" containerID="8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9" Feb 16 12:56:57 crc kubenswrapper[4799]: E0216 12:56:57.446332 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9\": container with ID starting with 
8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9 not found: ID does not exist" containerID="8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.446362 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9"} err="failed to get container status \"8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9\": rpc error: code = NotFound desc = could not find container \"8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9\": container with ID starting with 8795c61a514dccc77243e946f72fec2ddbb28c91b44e2a37258c87790d41fed9 not found: ID does not exist" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.446381 4799 scope.go:117] "RemoveContainer" containerID="56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389" Feb 16 12:56:57 crc kubenswrapper[4799]: E0216 12:56:57.446768 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389\": container with ID starting with 56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389 not found: ID does not exist" containerID="56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389" Feb 16 12:56:57 crc kubenswrapper[4799]: I0216 12:56:57.446859 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389"} err="failed to get container status \"56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389\": rpc error: code = NotFound desc = could not find container \"56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389\": container with ID starting with 56df3b0030dbc557370f9caf66a97aba6dfd2167a3069cf71725d9a4991f4389 not found: ID does not 
exist" Feb 16 12:56:59 crc kubenswrapper[4799]: I0216 12:56:59.162004 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" path="/var/lib/kubelet/pods/fb6abd32-e17a-4892-861c-112eeedc29be/volumes" Feb 16 12:57:00 crc kubenswrapper[4799]: I0216 12:57:00.368835 4799 generic.go:334] "Generic (PLEG): container finished" podID="4e06d186-e0e8-4b62-8e6a-087d37dbd8c5" containerID="fa0439eafb31fde53705ae224989acd95e93d49a08038ee89f1bcf5a87fbae21" exitCode=0 Feb 16 12:57:00 crc kubenswrapper[4799]: I0216 12:57:00.368887 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" event={"ID":"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5","Type":"ContainerDied","Data":"fa0439eafb31fde53705ae224989acd95e93d49a08038ee89f1bcf5a87fbae21"} Feb 16 12:57:01 crc kubenswrapper[4799]: I0216 12:57:01.853222 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.055392 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-inventory\") pod \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.055533 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-ssh-key-openstack-edpm-ipam\") pod \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.055644 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rck8\" (UniqueName: 
\"kubernetes.io/projected/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-kube-api-access-4rck8\") pod \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\" (UID: \"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5\") " Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.067069 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-kube-api-access-4rck8" (OuterVolumeSpecName: "kube-api-access-4rck8") pod "4e06d186-e0e8-4b62-8e6a-087d37dbd8c5" (UID: "4e06d186-e0e8-4b62-8e6a-087d37dbd8c5"). InnerVolumeSpecName "kube-api-access-4rck8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.089714 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-inventory" (OuterVolumeSpecName: "inventory") pod "4e06d186-e0e8-4b62-8e6a-087d37dbd8c5" (UID: "4e06d186-e0e8-4b62-8e6a-087d37dbd8c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.090300 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4e06d186-e0e8-4b62-8e6a-087d37dbd8c5" (UID: "4e06d186-e0e8-4b62-8e6a-087d37dbd8c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.161185 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.161221 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rck8\" (UniqueName: \"kubernetes.io/projected/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-kube-api-access-4rck8\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.161235 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e06d186-e0e8-4b62-8e6a-087d37dbd8c5-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.389170 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" event={"ID":"4e06d186-e0e8-4b62-8e6a-087d37dbd8c5","Type":"ContainerDied","Data":"c9db9fdd54bb9ded3e9d957623ead0cd12bc63ade7deaf9326e8f75540ea664e"} Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.389205 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8xh9d" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.389216 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9db9fdd54bb9ded3e9d957623ead0cd12bc63ade7deaf9326e8f75540ea664e" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.535501 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs"] Feb 16 12:57:02 crc kubenswrapper[4799]: E0216 12:57:02.537535 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" containerName="extract-content" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.537612 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" containerName="extract-content" Feb 16 12:57:02 crc kubenswrapper[4799]: E0216 12:57:02.537644 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" containerName="extract-utilities" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.537652 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" containerName="extract-utilities" Feb 16 12:57:02 crc kubenswrapper[4799]: E0216 12:57:02.537678 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e06d186-e0e8-4b62-8e6a-087d37dbd8c5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.537687 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e06d186-e0e8-4b62-8e6a-087d37dbd8c5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 12:57:02 crc kubenswrapper[4799]: E0216 12:57:02.537793 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" containerName="registry-server" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 
12:57:02.537802 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" containerName="registry-server" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.538502 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e06d186-e0e8-4b62-8e6a-087d37dbd8c5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.538561 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6abd32-e17a-4892-861c-112eeedc29be" containerName="registry-server" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.562959 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.570917 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.571211 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.571369 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.571814 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.611190 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs"] Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.678104 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp9dq\" (UniqueName: \"kubernetes.io/projected/4ea66d5c-7325-440d-816c-c02db1d1bf90-kube-api-access-gp9dq\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.678226 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.678370 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.678453 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.780504 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.780780 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.780911 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp9dq\" (UniqueName: \"kubernetes.io/projected/4ea66d5c-7325-440d-816c-c02db1d1bf90-kube-api-access-gp9dq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.781024 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.785452 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.785731 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.786687 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.801166 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp9dq\" (UniqueName: \"kubernetes.io/projected/4ea66d5c-7325-440d-816c-c02db1d1bf90-kube-api-access-gp9dq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:02 crc kubenswrapper[4799]: I0216 12:57:02.888104 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 12:57:03 crc kubenswrapper[4799]: I0216 12:57:03.731799 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs"] Feb 16 12:57:04 crc kubenswrapper[4799]: I0216 12:57:04.633341 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" event={"ID":"4ea66d5c-7325-440d-816c-c02db1d1bf90","Type":"ContainerStarted","Data":"bf3ab7ca03b6be2b6e1ed22f347f6baa5991768af565f25995bf98140e7a34b8"} Feb 16 12:57:05 crc kubenswrapper[4799]: I0216 12:57:05.643469 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" event={"ID":"4ea66d5c-7325-440d-816c-c02db1d1bf90","Type":"ContainerStarted","Data":"22b03bc9d87e5752dbb29e2d0785cdca4f384fbe07b9ca4d52b3d8976babf048"} Feb 16 12:57:05 crc kubenswrapper[4799]: I0216 12:57:05.674633 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" podStartSLOduration=3.023958288 podStartE2EDuration="3.674611237s" podCreationTimestamp="2026-02-16 12:57:02 +0000 UTC" firstStartedPulling="2026-02-16 12:57:03.736887335 +0000 UTC m=+1529.329902669" lastFinishedPulling="2026-02-16 12:57:04.387540274 +0000 UTC m=+1529.980555618" observedRunningTime="2026-02-16 12:57:05.664196351 +0000 UTC m=+1531.257211695" watchObservedRunningTime="2026-02-16 12:57:05.674611237 +0000 UTC m=+1531.267626571" Feb 16 12:57:40 crc kubenswrapper[4799]: I0216 12:57:40.969327 4799 scope.go:117] "RemoveContainer" containerID="0c75451f10a68da626c898a9ba324f74b9efc33a0a34ef25960ca38ff2ae70f3" Feb 16 12:57:41 crc kubenswrapper[4799]: I0216 12:57:41.017826 4799 scope.go:117] "RemoveContainer" containerID="788ddcab4cad9150958fd9efdf483fe720eeedd1fbc8f3e163e71118b3abd8f6" Feb 16 12:58:51 crc 
kubenswrapper[4799]: I0216 12:58:51.792649 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:58:51 crc kubenswrapper[4799]: I0216 12:58:51.793113 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:59:21 crc kubenswrapper[4799]: I0216 12:59:21.793433 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:59:21 crc kubenswrapper[4799]: I0216 12:59:21.794013 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:59:51 crc kubenswrapper[4799]: I0216 12:59:51.793566 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:59:51 crc kubenswrapper[4799]: I0216 12:59:51.794347 4799 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:59:51 crc kubenswrapper[4799]: I0216 12:59:51.794442 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 12:59:51 crc kubenswrapper[4799]: I0216 12:59:51.795395 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:59:51 crc kubenswrapper[4799]: I0216 12:59:51.795487 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" gracePeriod=600 Feb 16 12:59:51 crc kubenswrapper[4799]: E0216 12:59:51.916449 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 12:59:52 crc kubenswrapper[4799]: I0216 12:59:52.599947 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" 
containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" exitCode=0 Feb 16 12:59:52 crc kubenswrapper[4799]: I0216 12:59:52.600002 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15"} Feb 16 12:59:52 crc kubenswrapper[4799]: I0216 12:59:52.600051 4799 scope.go:117] "RemoveContainer" containerID="1050488caaf418ecf3c571c9e2581e4f4da347fd70264d129d94529e08845412" Feb 16 12:59:52 crc kubenswrapper[4799]: I0216 12:59:52.600927 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 12:59:52 crc kubenswrapper[4799]: E0216 12:59:52.601381 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 12:59:53 crc kubenswrapper[4799]: I0216 12:59:53.040651 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-b692-account-create-update-mmrg5"] Feb 16 12:59:53 crc kubenswrapper[4799]: I0216 12:59:53.053308 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-2hqd8"] Feb 16 12:59:53 crc kubenswrapper[4799]: I0216 12:59:53.064861 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-b692-account-create-update-mmrg5"] Feb 16 12:59:53 crc kubenswrapper[4799]: I0216 12:59:53.074304 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-2hqd8"] Feb 16 12:59:53 crc kubenswrapper[4799]: I0216 
12:59:53.159973 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5733f514-65f3-49c8-a40b-586eae0eb996" path="/var/lib/kubelet/pods/5733f514-65f3-49c8-a40b-586eae0eb996/volumes" Feb 16 12:59:53 crc kubenswrapper[4799]: I0216 12:59:53.161792 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad67827-6ed7-48ce-842d-413a84f9171d" path="/var/lib/kubelet/pods/6ad67827-6ed7-48ce-842d-413a84f9171d/volumes" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.151921 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k"] Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.153770 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.156341 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.157064 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.162260 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k"] Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.310157 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00a4eb39-b079-4053-a166-8ca7a6987683-secret-volume\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.311556 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j779r\" (UniqueName: \"kubernetes.io/projected/00a4eb39-b079-4053-a166-8ca7a6987683-kube-api-access-j779r\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.311598 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a4eb39-b079-4053-a166-8ca7a6987683-config-volume\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.413854 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j779r\" (UniqueName: \"kubernetes.io/projected/00a4eb39-b079-4053-a166-8ca7a6987683-kube-api-access-j779r\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.413935 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a4eb39-b079-4053-a166-8ca7a6987683-config-volume\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.414047 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00a4eb39-b079-4053-a166-8ca7a6987683-secret-volume\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.415954 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a4eb39-b079-4053-a166-8ca7a6987683-config-volume\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.425735 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00a4eb39-b079-4053-a166-8ca7a6987683-secret-volume\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.435363 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j779r\" (UniqueName: \"kubernetes.io/projected/00a4eb39-b079-4053-a166-8ca7a6987683-kube-api-access-j779r\") pod \"collect-profiles-29520780-kvx5k\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.484695 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:00 crc kubenswrapper[4799]: I0216 13:00:00.948675 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k"] Feb 16 13:00:01 crc kubenswrapper[4799]: I0216 13:00:01.755489 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" event={"ID":"00a4eb39-b079-4053-a166-8ca7a6987683","Type":"ContainerStarted","Data":"9151f31eb1d77ad483c080726f7660732c050b0f2415d8fb7bd4bb1396a594dd"} Feb 16 13:00:02 crc kubenswrapper[4799]: I0216 13:00:02.088882 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-snrvh"] Feb 16 13:00:02 crc kubenswrapper[4799]: I0216 13:00:02.099471 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-snrvh"] Feb 16 13:00:02 crc kubenswrapper[4799]: I0216 13:00:02.770478 4799 generic.go:334] "Generic (PLEG): container finished" podID="00a4eb39-b079-4053-a166-8ca7a6987683" containerID="c0a0c2cbea84d45457d4fb818ba90bddc315c588164d4d0c27ce8da7bc62ff6b" exitCode=0 Feb 16 13:00:02 crc kubenswrapper[4799]: I0216 13:00:02.770600 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" event={"ID":"00a4eb39-b079-4053-a166-8ca7a6987683","Type":"ContainerDied","Data":"c0a0c2cbea84d45457d4fb818ba90bddc315c588164d4d0c27ce8da7bc62ff6b"} Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.030259 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-df4c-account-create-update-qbwnq"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.042323 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-bq6vr"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.070337 4799 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-82de-account-create-update-t8zkv"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.070408 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ms4pb"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.077951 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-bq6vr"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.092321 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-df4c-account-create-update-qbwnq"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.095667 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d05a-account-create-update-7zctx"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.115000 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ms4pb"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.115055 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-82de-account-create-update-t8zkv"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.122172 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d05a-account-create-update-7zctx"] Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.388600 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140b9f7a-d350-46a3-bd9d-83180f2d839b" path="/var/lib/kubelet/pods/140b9f7a-d350-46a3-bd9d-83180f2d839b/volumes" Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.389267 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a1b93a-7a9e-49a5-8264-a9afb09de45d" path="/var/lib/kubelet/pods/26a1b93a-7a9e-49a5-8264-a9afb09de45d/volumes" Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.390348 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0089fc-1608-4b31-9219-bff2d2cbed59" 
path="/var/lib/kubelet/pods/4d0089fc-1608-4b31-9219-bff2d2cbed59/volumes" Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.391598 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60b4b8f9-9fae-4b55-8906-8bc269dc9f19" path="/var/lib/kubelet/pods/60b4b8f9-9fae-4b55-8906-8bc269dc9f19/volumes" Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.392609 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7801634e-581b-49f9-b90a-a1cd47b1d2fb" path="/var/lib/kubelet/pods/7801634e-581b-49f9-b90a-a1cd47b1d2fb/volumes" Feb 16 13:00:03 crc kubenswrapper[4799]: I0216 13:00:03.393157 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c90542f3-200f-4070-b73a-3b8bfd004fdc" path="/var/lib/kubelet/pods/c90542f3-200f-4070-b73a-3b8bfd004fdc/volumes" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.373504 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.499334 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00a4eb39-b079-4053-a166-8ca7a6987683-secret-volume\") pod \"00a4eb39-b079-4053-a166-8ca7a6987683\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.499585 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a4eb39-b079-4053-a166-8ca7a6987683-config-volume\") pod \"00a4eb39-b079-4053-a166-8ca7a6987683\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.499703 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j779r\" (UniqueName: 
\"kubernetes.io/projected/00a4eb39-b079-4053-a166-8ca7a6987683-kube-api-access-j779r\") pod \"00a4eb39-b079-4053-a166-8ca7a6987683\" (UID: \"00a4eb39-b079-4053-a166-8ca7a6987683\") " Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.500472 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00a4eb39-b079-4053-a166-8ca7a6987683-config-volume" (OuterVolumeSpecName: "config-volume") pod "00a4eb39-b079-4053-a166-8ca7a6987683" (UID: "00a4eb39-b079-4053-a166-8ca7a6987683"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.506805 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a4eb39-b079-4053-a166-8ca7a6987683-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "00a4eb39-b079-4053-a166-8ca7a6987683" (UID: "00a4eb39-b079-4053-a166-8ca7a6987683"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.506850 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00a4eb39-b079-4053-a166-8ca7a6987683-kube-api-access-j779r" (OuterVolumeSpecName: "kube-api-access-j779r") pod "00a4eb39-b079-4053-a166-8ca7a6987683" (UID: "00a4eb39-b079-4053-a166-8ca7a6987683"). InnerVolumeSpecName "kube-api-access-j779r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.601882 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j779r\" (UniqueName: \"kubernetes.io/projected/00a4eb39-b079-4053-a166-8ca7a6987683-kube-api-access-j779r\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.601932 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00a4eb39-b079-4053-a166-8ca7a6987683-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.601941 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a4eb39-b079-4053-a166-8ca7a6987683-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.801078 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" event={"ID":"00a4eb39-b079-4053-a166-8ca7a6987683","Type":"ContainerDied","Data":"9151f31eb1d77ad483c080726f7660732c050b0f2415d8fb7bd4bb1396a594dd"} Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.801479 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9151f31eb1d77ad483c080726f7660732c050b0f2415d8fb7bd4bb1396a594dd" Feb 16 13:00:04 crc kubenswrapper[4799]: I0216 13:00:04.801167 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k" Feb 16 13:00:06 crc kubenswrapper[4799]: I0216 13:00:06.148809 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:00:06 crc kubenswrapper[4799]: E0216 13:00:06.149493 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:00:12 crc kubenswrapper[4799]: I0216 13:00:12.874856 4799 generic.go:334] "Generic (PLEG): container finished" podID="4ea66d5c-7325-440d-816c-c02db1d1bf90" containerID="22b03bc9d87e5752dbb29e2d0785cdca4f384fbe07b9ca4d52b3d8976babf048" exitCode=0 Feb 16 13:00:12 crc kubenswrapper[4799]: I0216 13:00:12.875357 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" event={"ID":"4ea66d5c-7325-440d-816c-c02db1d1bf90","Type":"ContainerDied","Data":"22b03bc9d87e5752dbb29e2d0785cdca4f384fbe07b9ca4d52b3d8976babf048"} Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.169869 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.247783 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-bootstrap-combined-ca-bundle\") pod \"4ea66d5c-7325-440d-816c-c02db1d1bf90\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.251335 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4ea66d5c-7325-440d-816c-c02db1d1bf90" (UID: "4ea66d5c-7325-440d-816c-c02db1d1bf90"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.250797 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp9dq\" (UniqueName: \"kubernetes.io/projected/4ea66d5c-7325-440d-816c-c02db1d1bf90-kube-api-access-gp9dq\") pod \"4ea66d5c-7325-440d-816c-c02db1d1bf90\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.252214 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-inventory\") pod \"4ea66d5c-7325-440d-816c-c02db1d1bf90\" (UID: \"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.252292 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-ssh-key-openstack-edpm-ipam\") pod \"4ea66d5c-7325-440d-816c-c02db1d1bf90\" (UID: 
\"4ea66d5c-7325-440d-816c-c02db1d1bf90\") " Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.253785 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea66d5c-7325-440d-816c-c02db1d1bf90-kube-api-access-gp9dq" (OuterVolumeSpecName: "kube-api-access-gp9dq") pod "4ea66d5c-7325-440d-816c-c02db1d1bf90" (UID: "4ea66d5c-7325-440d-816c-c02db1d1bf90"). InnerVolumeSpecName "kube-api-access-gp9dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.254531 4799 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.254557 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp9dq\" (UniqueName: \"kubernetes.io/projected/4ea66d5c-7325-440d-816c-c02db1d1bf90-kube-api-access-gp9dq\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.280080 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ea66d5c-7325-440d-816c-c02db1d1bf90" (UID: "4ea66d5c-7325-440d-816c-c02db1d1bf90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.280586 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-inventory" (OuterVolumeSpecName: "inventory") pod "4ea66d5c-7325-440d-816c-c02db1d1bf90" (UID: "4ea66d5c-7325-440d-816c-c02db1d1bf90"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.356757 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.356794 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea66d5c-7325-440d-816c-c02db1d1bf90-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.906391 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" event={"ID":"4ea66d5c-7325-440d-816c-c02db1d1bf90","Type":"ContainerDied","Data":"bf3ab7ca03b6be2b6e1ed22f347f6baa5991768af565f25995bf98140e7a34b8"} Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.906437 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3ab7ca03b6be2b6e1ed22f347f6baa5991768af565f25995bf98140e7a34b8" Feb 16 13:00:15 crc kubenswrapper[4799]: I0216 13:00:15.906446 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.274336 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz"] Feb 16 13:00:16 crc kubenswrapper[4799]: E0216 13:00:16.276208 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a4eb39-b079-4053-a166-8ca7a6987683" containerName="collect-profiles" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.276304 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a4eb39-b079-4053-a166-8ca7a6987683" containerName="collect-profiles" Feb 16 13:00:16 crc kubenswrapper[4799]: E0216 13:00:16.276397 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea66d5c-7325-440d-816c-c02db1d1bf90" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.276456 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea66d5c-7325-440d-816c-c02db1d1bf90" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.276880 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea66d5c-7325-440d-816c-c02db1d1bf90" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.276976 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="00a4eb39-b079-4053-a166-8ca7a6987683" containerName="collect-profiles" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.277739 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.280863 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.281336 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.281574 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.281826 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.284030 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz"] Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.950796 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsb4\" (UniqueName: \"kubernetes.io/projected/ceaa23db-d28e-4d2f-bf84-7336146bfb41-kube-api-access-wlsb4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 13:00:16.951367 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:16 crc kubenswrapper[4799]: I0216 
13:00:16.955004 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.057275 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.057434 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsb4\" (UniqueName: \"kubernetes.io/projected/ceaa23db-d28e-4d2f-bf84-7336146bfb41-kube-api-access-wlsb4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.057540 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.063859 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.074598 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.081743 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsb4\" (UniqueName: \"kubernetes.io/projected/ceaa23db-d28e-4d2f-bf84-7336146bfb41-kube-api-access-wlsb4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.280032 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.810634 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz"] Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.820559 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:00:17 crc kubenswrapper[4799]: I0216 13:00:17.986585 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" event={"ID":"ceaa23db-d28e-4d2f-bf84-7336146bfb41","Type":"ContainerStarted","Data":"fbbcbf40a8a9cbb94ef7c45cf7d4df0aed8f26297558b16631dea2c98c05dc4b"} Feb 16 13:00:18 crc kubenswrapper[4799]: I0216 13:00:18.149428 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:00:18 crc kubenswrapper[4799]: E0216 13:00:18.149700 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:00:20 crc kubenswrapper[4799]: I0216 13:00:20.008597 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" event={"ID":"ceaa23db-d28e-4d2f-bf84-7336146bfb41","Type":"ContainerStarted","Data":"87004c3dedc9b351f7c1fd4ad875ba40088a6a49b38bb4d3190cde46a45b25e7"} Feb 16 13:00:20 crc kubenswrapper[4799]: I0216 13:00:20.049069 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" podStartSLOduration=2.93536163 podStartE2EDuration="4.049033736s" podCreationTimestamp="2026-02-16 13:00:16 +0000 UTC" firstStartedPulling="2026-02-16 13:00:17.820185442 +0000 UTC m=+1723.413200796" lastFinishedPulling="2026-02-16 13:00:18.933857568 +0000 UTC m=+1724.526872902" observedRunningTime="2026-02-16 13:00:20.028980075 +0000 UTC m=+1725.621995409" watchObservedRunningTime="2026-02-16 13:00:20.049033736 +0000 UTC m=+1725.642049110" Feb 16 13:00:30 crc kubenswrapper[4799]: I0216 13:00:30.149935 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:00:30 crc kubenswrapper[4799]: E0216 13:00:30.151697 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:00:34 crc kubenswrapper[4799]: I0216 13:00:34.046263 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mxmd5"] Feb 16 13:00:34 crc kubenswrapper[4799]: I0216 13:00:34.057737 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mxmd5"] Feb 16 13:00:35 crc kubenswrapper[4799]: I0216 13:00:35.163449 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff79791d-f33a-4986-9dd4-67c6af5bf747" path="/var/lib/kubelet/pods/ff79791d-f33a-4986-9dd4-67c6af5bf747/volumes" Feb 16 13:00:37 crc kubenswrapper[4799]: I0216 13:00:37.031386 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dvv2w"] Feb 16 13:00:37 crc kubenswrapper[4799]: I0216 13:00:37.044334 4799 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dvv2w"] Feb 16 13:00:37 crc kubenswrapper[4799]: I0216 13:00:37.160487 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043950ea-86bb-464a-b829-8816123fe1cd" path="/var/lib/kubelet/pods/043950ea-86bb-464a-b829-8816123fe1cd/volumes" Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.042418 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2a53-account-create-update-zc78k"] Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.054665 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-l2t2n"] Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.064890 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-l2t2n"] Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.073665 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1745-account-create-update-h4zdh"] Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.082563 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2a53-account-create-update-zc78k"] Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.092460 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1745-account-create-update-h4zdh"] Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.162661 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201c84bc-cc45-471a-a86c-fe79ab2a2174" path="/var/lib/kubelet/pods/201c84bc-cc45-471a-a86c-fe79ab2a2174/volumes" Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.163282 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430af87d-ae1f-4b73-93e7-d8aa93192ae5" path="/var/lib/kubelet/pods/430af87d-ae1f-4b73-93e7-d8aa93192ae5/volumes" Feb 16 13:00:39 crc kubenswrapper[4799]: I0216 13:00:39.163939 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="591accad-9c4d-4e29-bdf9-d673ed928210" path="/var/lib/kubelet/pods/591accad-9c4d-4e29-bdf9-d673ed928210/volumes" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.149971 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:00:41 crc kubenswrapper[4799]: E0216 13:00:41.150742 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.184230 4799 scope.go:117] "RemoveContainer" containerID="9d36bcf0e9b91e3d6eefd123eee0031ce1c5f0a0aa56b88ef64d8673381beb5f" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.224283 4799 scope.go:117] "RemoveContainer" containerID="bffd513b46ccab39835a381969309e0e3110475eb99e184756a9c39f61b14a9d" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.266604 4799 scope.go:117] "RemoveContainer" containerID="bfdffbf0c2790705c63dbf605ba7e8cd906ac7b132fe2364a598b5710bdfe6f3" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.318429 4799 scope.go:117] "RemoveContainer" containerID="94c71a38347edea2ea51aba3a544b5ad15abb91269fda2cd25c87a6bbce94efc" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.377851 4799 scope.go:117] "RemoveContainer" containerID="c7ac133f56cdafcd3d9cd734cffc7c6b5986212bfe920dfbea4cc4a49954355f" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.417331 4799 scope.go:117] "RemoveContainer" containerID="9c829cb841312ef1c6100b12c46d70ae8424e3889fac5f1c891d6206dd980ef3" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.463086 4799 scope.go:117] "RemoveContainer" 
containerID="d362b8456ce4a9e33524e5b966bfb8bf280acac31c39c28446516f704dd4edfc" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.495251 4799 scope.go:117] "RemoveContainer" containerID="e222cb2d9a4e02771095659a3b0aa4c20bb3797650c080c586ae6417dba4b1fe" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.515127 4799 scope.go:117] "RemoveContainer" containerID="3e9c09e275a75b9cafa94a282b53925801b0763f43ffbbfe5048316f689fe85b" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.535362 4799 scope.go:117] "RemoveContainer" containerID="ed45fc02ff39e9b6f6e6a25d8cc23fc9941e17bd530bec411fd19708e7df0a92" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.576877 4799 scope.go:117] "RemoveContainer" containerID="b4fba0d86739ea19470d37843f86d53dee7132cf6e59e5e24e668b8e835c12b6" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.605851 4799 scope.go:117] "RemoveContainer" containerID="107b6b0e33708aff2a2b76daf80c82607b7946be11c7ba535b5e2249ca2ea614" Feb 16 13:00:41 crc kubenswrapper[4799]: I0216 13:00:41.634856 4799 scope.go:117] "RemoveContainer" containerID="60424934eccdb7ea226c0c4e856a1cd68af7617cbb4e2361bbf6f10f0c951a6e" Feb 16 13:00:43 crc kubenswrapper[4799]: I0216 13:00:43.031703 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4e39-account-create-update-kwmxq"] Feb 16 13:00:43 crc kubenswrapper[4799]: I0216 13:00:43.041582 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4e39-account-create-update-kwmxq"] Feb 16 13:00:43 crc kubenswrapper[4799]: I0216 13:00:43.159031 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f783521e-3e89-4fc3-bdb6-08bc1ee82739" path="/var/lib/kubelet/pods/f783521e-3e89-4fc3-bdb6-08bc1ee82739/volumes" Feb 16 13:00:49 crc kubenswrapper[4799]: I0216 13:00:49.030612 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4hjhh"] Feb 16 13:00:49 crc kubenswrapper[4799]: I0216 13:00:49.045939 4799 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4hjhh"] Feb 16 13:00:49 crc kubenswrapper[4799]: I0216 13:00:49.162187 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f90f7da9-52e1-4369-a123-145ec31299db" path="/var/lib/kubelet/pods/f90f7da9-52e1-4369-a123-145ec31299db/volumes" Feb 16 13:00:53 crc kubenswrapper[4799]: I0216 13:00:53.041725 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zq5qf"] Feb 16 13:00:53 crc kubenswrapper[4799]: I0216 13:00:53.053122 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zq5qf"] Feb 16 13:00:53 crc kubenswrapper[4799]: I0216 13:00:53.163251 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4b5dce-a5b1-4372-8138-03e7d62b9772" path="/var/lib/kubelet/pods/0e4b5dce-a5b1-4372-8138-03e7d62b9772/volumes" Feb 16 13:00:55 crc kubenswrapper[4799]: I0216 13:00:55.168341 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:00:55 crc kubenswrapper[4799]: E0216 13:00:55.168642 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:00:57 crc kubenswrapper[4799]: I0216 13:00:57.028878 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-2x28q"] Feb 16 13:00:57 crc kubenswrapper[4799]: I0216 13:00:57.037281 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-2x28q"] Feb 16 13:00:57 crc kubenswrapper[4799]: I0216 13:00:57.164447 4799 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="762cb41d-d3c9-4b97-bdbf-7062f65fba96" path="/var/lib/kubelet/pods/762cb41d-d3c9-4b97-bdbf-7062f65fba96/volumes" Feb 16 13:00:58 crc kubenswrapper[4799]: I0216 13:00:58.032613 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8scmb"] Feb 16 13:00:58 crc kubenswrapper[4799]: I0216 13:00:58.043626 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8scmb"] Feb 16 13:00:59 crc kubenswrapper[4799]: I0216 13:00:59.164484 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1144c46a-41c9-4032-8811-2b3c930586f9" path="/var/lib/kubelet/pods/1144c46a-41c9-4032-8811-2b3c930586f9/volumes" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.151797 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520781-vcnnm"] Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.153088 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.169289 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520781-vcnnm"] Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.237768 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-fernet-keys\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.237934 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-combined-ca-bundle\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " 
pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.237970 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslwm\" (UniqueName: \"kubernetes.io/projected/2a2944ce-d43d-455d-81c0-21e082c4c544-kube-api-access-qslwm\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.238027 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-config-data\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.340150 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-fernet-keys\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.340279 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-combined-ca-bundle\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.340372 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qslwm\" (UniqueName: \"kubernetes.io/projected/2a2944ce-d43d-455d-81c0-21e082c4c544-kube-api-access-qslwm\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " 
pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.340429 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-config-data\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.348247 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-fernet-keys\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.355235 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-config-data\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.357418 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-combined-ca-bundle\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 13:01:00.359522 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qslwm\" (UniqueName: \"kubernetes.io/projected/2a2944ce-d43d-455d-81c0-21e082c4c544-kube-api-access-qslwm\") pod \"keystone-cron-29520781-vcnnm\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:00 crc kubenswrapper[4799]: I0216 
13:01:00.556044 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:01 crc kubenswrapper[4799]: I0216 13:01:01.033037 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520781-vcnnm"] Feb 16 13:01:01 crc kubenswrapper[4799]: I0216 13:01:01.522376 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520781-vcnnm" event={"ID":"2a2944ce-d43d-455d-81c0-21e082c4c544","Type":"ContainerStarted","Data":"a50b6822387a5729232c7d777326d2fd8ba9fb65a1373e1ac9bfa6b3362049d5"} Feb 16 13:01:02 crc kubenswrapper[4799]: I0216 13:01:02.531315 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520781-vcnnm" event={"ID":"2a2944ce-d43d-455d-81c0-21e082c4c544","Type":"ContainerStarted","Data":"d761694d1ff48c636ea916846655969469b59e05b3e468d53d85531dafc1182c"} Feb 16 13:01:02 crc kubenswrapper[4799]: I0216 13:01:02.556358 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520781-vcnnm" podStartSLOduration=2.556332266 podStartE2EDuration="2.556332266s" podCreationTimestamp="2026-02-16 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:01:02.550998844 +0000 UTC m=+1768.144014188" watchObservedRunningTime="2026-02-16 13:01:02.556332266 +0000 UTC m=+1768.149347600" Feb 16 13:01:05 crc kubenswrapper[4799]: I0216 13:01:05.562269 4799 generic.go:334] "Generic (PLEG): container finished" podID="2a2944ce-d43d-455d-81c0-21e082c4c544" containerID="d761694d1ff48c636ea916846655969469b59e05b3e468d53d85531dafc1182c" exitCode=0 Feb 16 13:01:05 crc kubenswrapper[4799]: I0216 13:01:05.562350 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520781-vcnnm" 
event={"ID":"2a2944ce-d43d-455d-81c0-21e082c4c544","Type":"ContainerDied","Data":"d761694d1ff48c636ea916846655969469b59e05b3e468d53d85531dafc1182c"} Feb 16 13:01:06 crc kubenswrapper[4799]: I0216 13:01:06.956666 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.073940 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qslwm\" (UniqueName: \"kubernetes.io/projected/2a2944ce-d43d-455d-81c0-21e082c4c544-kube-api-access-qslwm\") pod \"2a2944ce-d43d-455d-81c0-21e082c4c544\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.075240 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-config-data\") pod \"2a2944ce-d43d-455d-81c0-21e082c4c544\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.075328 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-combined-ca-bundle\") pod \"2a2944ce-d43d-455d-81c0-21e082c4c544\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.075433 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-fernet-keys\") pod \"2a2944ce-d43d-455d-81c0-21e082c4c544\" (UID: \"2a2944ce-d43d-455d-81c0-21e082c4c544\") " Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.092380 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "2a2944ce-d43d-455d-81c0-21e082c4c544" (UID: "2a2944ce-d43d-455d-81c0-21e082c4c544"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.138854 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2944ce-d43d-455d-81c0-21e082c4c544-kube-api-access-qslwm" (OuterVolumeSpecName: "kube-api-access-qslwm") pod "2a2944ce-d43d-455d-81c0-21e082c4c544" (UID: "2a2944ce-d43d-455d-81c0-21e082c4c544"). InnerVolumeSpecName "kube-api-access-qslwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.143838 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a2944ce-d43d-455d-81c0-21e082c4c544" (UID: "2a2944ce-d43d-455d-81c0-21e082c4c544"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.165996 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-config-data" (OuterVolumeSpecName: "config-data") pod "2a2944ce-d43d-455d-81c0-21e082c4c544" (UID: "2a2944ce-d43d-455d-81c0-21e082c4c544"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.182898 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qslwm\" (UniqueName: \"kubernetes.io/projected/2a2944ce-d43d-455d-81c0-21e082c4c544-kube-api-access-qslwm\") on node \"crc\" DevicePath \"\"" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.182937 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.182953 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.182966 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a2944ce-d43d-455d-81c0-21e082c4c544-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.588292 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520781-vcnnm" event={"ID":"2a2944ce-d43d-455d-81c0-21e082c4c544","Type":"ContainerDied","Data":"a50b6822387a5729232c7d777326d2fd8ba9fb65a1373e1ac9bfa6b3362049d5"} Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.588339 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a50b6822387a5729232c7d777326d2fd8ba9fb65a1373e1ac9bfa6b3362049d5" Feb 16 13:01:07 crc kubenswrapper[4799]: I0216 13:01:07.588406 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520781-vcnnm" Feb 16 13:01:08 crc kubenswrapper[4799]: I0216 13:01:08.149264 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:01:08 crc kubenswrapper[4799]: E0216 13:01:08.149728 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:01:10 crc kubenswrapper[4799]: I0216 13:01:10.935913 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7f54946f5f-2jrb5" podUID="441c04e7-2794-48cf-bc03-4c13536d22c4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 13:01:19 crc kubenswrapper[4799]: I0216 13:01:19.150427 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:01:19 crc kubenswrapper[4799]: E0216 13:01:19.151544 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:01:33 crc kubenswrapper[4799]: I0216 13:01:33.149491 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:01:33 crc kubenswrapper[4799]: E0216 13:01:33.150596 4799 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:01:38 crc kubenswrapper[4799]: I0216 13:01:38.040411 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-j8vxl"] Feb 16 13:01:38 crc kubenswrapper[4799]: I0216 13:01:38.050846 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-j8vxl"] Feb 16 13:01:39 crc kubenswrapper[4799]: I0216 13:01:39.161555 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407468d3-5baf-4bde-af39-679ed83889c8" path="/var/lib/kubelet/pods/407468d3-5baf-4bde-af39-679ed83889c8/volumes" Feb 16 13:01:41 crc kubenswrapper[4799]: I0216 13:01:41.891491 4799 scope.go:117] "RemoveContainer" containerID="dbd1aff9ab43870c091e16e6f445ec17ae85f722d36250d5a7919d6c6660c5a0" Feb 16 13:01:41 crc kubenswrapper[4799]: I0216 13:01:41.926511 4799 scope.go:117] "RemoveContainer" containerID="fa6c1b8da983e0dee2d661b347a35553f1dac406a6246a771e9f1cd59eb8dbea" Feb 16 13:01:41 crc kubenswrapper[4799]: I0216 13:01:41.990713 4799 scope.go:117] "RemoveContainer" containerID="74e068cc325d38f01767ba58b734ee12d0b5d551a43aa3c5c1bb06d2568968a4" Feb 16 13:01:42 crc kubenswrapper[4799]: I0216 13:01:42.024321 4799 scope.go:117] "RemoveContainer" containerID="83bd9a072dc28001f442eaaa9890fb65092b985458e5ad1a553a1ad026e23036" Feb 16 13:01:42 crc kubenswrapper[4799]: I0216 13:01:42.076800 4799 scope.go:117] "RemoveContainer" containerID="16503752f0b490ed00baa9558939b9055ee4a233d05c6d184de771a79644cf49" Feb 16 13:01:42 crc kubenswrapper[4799]: I0216 13:01:42.136857 4799 scope.go:117] "RemoveContainer" 
containerID="795813fdf916b015d629fdd8271f5a0b2197c8ab7f8bc40338e11983626ec8ad" Feb 16 13:01:44 crc kubenswrapper[4799]: I0216 13:01:44.149693 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:01:44 crc kubenswrapper[4799]: E0216 13:01:44.150295 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:01:55 crc kubenswrapper[4799]: I0216 13:01:55.053023 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bgzm8"] Feb 16 13:01:55 crc kubenswrapper[4799]: I0216 13:01:55.072682 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rczq6"] Feb 16 13:01:55 crc kubenswrapper[4799]: I0216 13:01:55.081526 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bgzm8"] Feb 16 13:01:55 crc kubenswrapper[4799]: I0216 13:01:55.089757 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rczq6"] Feb 16 13:01:55 crc kubenswrapper[4799]: I0216 13:01:55.157526 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:01:55 crc kubenswrapper[4799]: E0216 13:01:55.157867 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:01:55 crc kubenswrapper[4799]: I0216 13:01:55.162044 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03cbd43b-bc5a-4954-aa6f-1cb9440076a9" path="/var/lib/kubelet/pods/03cbd43b-bc5a-4954-aa6f-1cb9440076a9/volumes" Feb 16 13:01:55 crc kubenswrapper[4799]: I0216 13:01:55.163097 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea741e8-2ce9-47a5-a56f-c4ede0af0124" path="/var/lib/kubelet/pods/2ea741e8-2ce9-47a5-a56f-c4ede0af0124/volumes" Feb 16 13:01:59 crc kubenswrapper[4799]: I0216 13:01:59.246926 4799 generic.go:334] "Generic (PLEG): container finished" podID="ceaa23db-d28e-4d2f-bf84-7336146bfb41" containerID="87004c3dedc9b351f7c1fd4ad875ba40088a6a49b38bb4d3190cde46a45b25e7" exitCode=0 Feb 16 13:01:59 crc kubenswrapper[4799]: I0216 13:01:59.247039 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" event={"ID":"ceaa23db-d28e-4d2f-bf84-7336146bfb41","Type":"ContainerDied","Data":"87004c3dedc9b351f7c1fd4ad875ba40088a6a49b38bb4d3190cde46a45b25e7"} Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.704914 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.847634 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-inventory\") pod \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.847701 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsb4\" (UniqueName: \"kubernetes.io/projected/ceaa23db-d28e-4d2f-bf84-7336146bfb41-kube-api-access-wlsb4\") pod \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.847831 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-ssh-key-openstack-edpm-ipam\") pod \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\" (UID: \"ceaa23db-d28e-4d2f-bf84-7336146bfb41\") " Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.853745 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceaa23db-d28e-4d2f-bf84-7336146bfb41-kube-api-access-wlsb4" (OuterVolumeSpecName: "kube-api-access-wlsb4") pod "ceaa23db-d28e-4d2f-bf84-7336146bfb41" (UID: "ceaa23db-d28e-4d2f-bf84-7336146bfb41"). InnerVolumeSpecName "kube-api-access-wlsb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.883346 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-inventory" (OuterVolumeSpecName: "inventory") pod "ceaa23db-d28e-4d2f-bf84-7336146bfb41" (UID: "ceaa23db-d28e-4d2f-bf84-7336146bfb41"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.887734 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ceaa23db-d28e-4d2f-bf84-7336146bfb41" (UID: "ceaa23db-d28e-4d2f-bf84-7336146bfb41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.949918 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.949957 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsb4\" (UniqueName: \"kubernetes.io/projected/ceaa23db-d28e-4d2f-bf84-7336146bfb41-kube-api-access-wlsb4\") on node \"crc\" DevicePath \"\"" Feb 16 13:02:00 crc kubenswrapper[4799]: I0216 13:02:00.949969 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ceaa23db-d28e-4d2f-bf84-7336146bfb41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.314521 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" event={"ID":"ceaa23db-d28e-4d2f-bf84-7336146bfb41","Type":"ContainerDied","Data":"fbbcbf40a8a9cbb94ef7c45cf7d4df0aed8f26297558b16631dea2c98c05dc4b"} Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.314577 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbbcbf40a8a9cbb94ef7c45cf7d4df0aed8f26297558b16631dea2c98c05dc4b" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 
13:02:01.314654 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.433857 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf"] Feb 16 13:02:01 crc kubenswrapper[4799]: E0216 13:02:01.435100 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a2944ce-d43d-455d-81c0-21e082c4c544" containerName="keystone-cron" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.435140 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2944ce-d43d-455d-81c0-21e082c4c544" containerName="keystone-cron" Feb 16 13:02:01 crc kubenswrapper[4799]: E0216 13:02:01.435176 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceaa23db-d28e-4d2f-bf84-7336146bfb41" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.435187 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceaa23db-d28e-4d2f-bf84-7336146bfb41" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.435433 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceaa23db-d28e-4d2f-bf84-7336146bfb41" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.435466 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a2944ce-d43d-455d-81c0-21e082c4c544" containerName="keystone-cron" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.437453 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.441794 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.459626 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.459672 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.459719 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.469551 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf"] Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.564623 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.564751 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc 
kubenswrapper[4799]: I0216 13:02:01.564848 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzzv\" (UniqueName: \"kubernetes.io/projected/e8cd035a-4f87-419c-994a-1ab09e6da101-kube-api-access-ttzzv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.666455 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.666599 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzzv\" (UniqueName: \"kubernetes.io/projected/e8cd035a-4f87-419c-994a-1ab09e6da101-kube-api-access-ttzzv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.666674 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.677448 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.677584 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.694687 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzzv\" (UniqueName: \"kubernetes.io/projected/e8cd035a-4f87-419c-994a-1ab09e6da101-kube-api-access-ttzzv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:01 crc kubenswrapper[4799]: I0216 13:02:01.785519 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:02:02 crc kubenswrapper[4799]: I0216 13:02:02.401740 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf"] Feb 16 13:02:03 crc kubenswrapper[4799]: I0216 13:02:03.341784 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" event={"ID":"e8cd035a-4f87-419c-994a-1ab09e6da101","Type":"ContainerStarted","Data":"14a52e50f745dae73095a9624f55a5dbc41ebb52e668e4b2b00e259f3848e61a"} Feb 16 13:02:03 crc kubenswrapper[4799]: I0216 13:02:03.341835 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" event={"ID":"e8cd035a-4f87-419c-994a-1ab09e6da101","Type":"ContainerStarted","Data":"09f223e9b27c0150582cae451b07930b13fb357eb907f5ae3c424a282e3a88c1"} Feb 16 13:02:03 crc kubenswrapper[4799]: I0216 13:02:03.358945 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" podStartSLOduration=1.950270872 podStartE2EDuration="2.358917277s" podCreationTimestamp="2026-02-16 13:02:01 +0000 UTC" firstStartedPulling="2026-02-16 13:02:02.40520149 +0000 UTC m=+1827.998216814" lastFinishedPulling="2026-02-16 13:02:02.813847875 +0000 UTC m=+1828.406863219" observedRunningTime="2026-02-16 13:02:03.35516351 +0000 UTC m=+1828.948178834" watchObservedRunningTime="2026-02-16 13:02:03.358917277 +0000 UTC m=+1828.951932631" Feb 16 13:02:07 crc kubenswrapper[4799]: I0216 13:02:07.149653 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:02:07 crc kubenswrapper[4799]: E0216 13:02:07.150332 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:02:11 crc kubenswrapper[4799]: I0216 13:02:11.035436 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-x2bbw"] Feb 16 13:02:11 crc kubenswrapper[4799]: I0216 13:02:11.050895 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-x2bbw"] Feb 16 13:02:11 crc kubenswrapper[4799]: I0216 13:02:11.161709 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e821341e-3e99-4606-a96d-00adad2f39fb" path="/var/lib/kubelet/pods/e821341e-3e99-4606-a96d-00adad2f39fb/volumes" Feb 16 13:02:14 crc kubenswrapper[4799]: I0216 13:02:14.033948 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-m5dfr"] Feb 16 13:02:14 crc kubenswrapper[4799]: I0216 13:02:14.048571 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-m5dfr"] Feb 16 13:02:15 crc kubenswrapper[4799]: I0216 13:02:15.160792 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3d6bd7-bfe0-4951-8c70-ae25e5a07930" path="/var/lib/kubelet/pods/8e3d6bd7-bfe0-4951-8c70-ae25e5a07930/volumes" Feb 16 13:02:21 crc kubenswrapper[4799]: I0216 13:02:21.149779 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:02:21 crc kubenswrapper[4799]: E0216 13:02:21.150772 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:02:35 crc kubenswrapper[4799]: I0216 13:02:35.163418 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:02:35 crc kubenswrapper[4799]: E0216 13:02:35.164538 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:02:42 crc kubenswrapper[4799]: I0216 13:02:42.269183 4799 scope.go:117] "RemoveContainer" containerID="91337eefa295f64051763829d7f6722f895d9a52e33c40165438fd5a03064cd4" Feb 16 13:02:42 crc kubenswrapper[4799]: I0216 13:02:42.321034 4799 scope.go:117] "RemoveContainer" containerID="2cbc5e9ccb2b67c6a42b08a6f389487791bf15e7cebafcfbed12fa66596e62d7" Feb 16 13:02:42 crc kubenswrapper[4799]: I0216 13:02:42.379635 4799 scope.go:117] "RemoveContainer" containerID="497721a037daa43af30da6128b2c50671ea4cdfc4bf35f240def5332dea09e29" Feb 16 13:02:42 crc kubenswrapper[4799]: I0216 13:02:42.435977 4799 scope.go:117] "RemoveContainer" containerID="b52f75425facafb9dc4b8fa9b64e8b925694f38305b240d8d0425e375afb915e" Feb 16 13:02:50 crc kubenswrapper[4799]: I0216 13:02:50.150427 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:02:50 crc kubenswrapper[4799]: E0216 13:02:50.152676 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:03:04 crc kubenswrapper[4799]: I0216 13:03:04.149904 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:03:04 crc kubenswrapper[4799]: E0216 13:03:04.150845 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:03:11 crc kubenswrapper[4799]: I0216 13:03:11.990653 4799 generic.go:334] "Generic (PLEG): container finished" podID="e8cd035a-4f87-419c-994a-1ab09e6da101" containerID="14a52e50f745dae73095a9624f55a5dbc41ebb52e668e4b2b00e259f3848e61a" exitCode=0 Feb 16 13:03:11 crc kubenswrapper[4799]: I0216 13:03:11.990846 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" event={"ID":"e8cd035a-4f87-419c-994a-1ab09e6da101","Type":"ContainerDied","Data":"14a52e50f745dae73095a9624f55a5dbc41ebb52e668e4b2b00e259f3848e61a"} Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.420235 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.625587 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-inventory\") pod \"e8cd035a-4f87-419c-994a-1ab09e6da101\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.625947 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-ssh-key-openstack-edpm-ipam\") pod \"e8cd035a-4f87-419c-994a-1ab09e6da101\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.626022 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttzzv\" (UniqueName: \"kubernetes.io/projected/e8cd035a-4f87-419c-994a-1ab09e6da101-kube-api-access-ttzzv\") pod \"e8cd035a-4f87-419c-994a-1ab09e6da101\" (UID: \"e8cd035a-4f87-419c-994a-1ab09e6da101\") " Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.632285 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8cd035a-4f87-419c-994a-1ab09e6da101-kube-api-access-ttzzv" (OuterVolumeSpecName: "kube-api-access-ttzzv") pod "e8cd035a-4f87-419c-994a-1ab09e6da101" (UID: "e8cd035a-4f87-419c-994a-1ab09e6da101"). InnerVolumeSpecName "kube-api-access-ttzzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.660978 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8cd035a-4f87-419c-994a-1ab09e6da101" (UID: "e8cd035a-4f87-419c-994a-1ab09e6da101"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.661577 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-inventory" (OuterVolumeSpecName: "inventory") pod "e8cd035a-4f87-419c-994a-1ab09e6da101" (UID: "e8cd035a-4f87-419c-994a-1ab09e6da101"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.728391 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.728710 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttzzv\" (UniqueName: \"kubernetes.io/projected/e8cd035a-4f87-419c-994a-1ab09e6da101-kube-api-access-ttzzv\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:13 crc kubenswrapper[4799]: I0216 13:03:13.728791 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8cd035a-4f87-419c-994a-1ab09e6da101-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.008970 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" 
event={"ID":"e8cd035a-4f87-419c-994a-1ab09e6da101","Type":"ContainerDied","Data":"09f223e9b27c0150582cae451b07930b13fb357eb907f5ae3c424a282e3a88c1"} Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.009023 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f223e9b27c0150582cae451b07930b13fb357eb907f5ae3c424a282e3a88c1" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.009096 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.123000 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27"] Feb 16 13:03:14 crc kubenswrapper[4799]: E0216 13:03:14.123756 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cd035a-4f87-419c-994a-1ab09e6da101" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.123787 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cd035a-4f87-419c-994a-1ab09e6da101" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.124061 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8cd035a-4f87-419c-994a-1ab09e6da101" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.125013 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.128608 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.129279 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.129357 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.130406 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.136705 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.136922 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jf5f\" (UniqueName: \"kubernetes.io/projected/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-kube-api-access-6jf5f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.137113 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.152685 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27"] Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.239766 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.241181 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jf5f\" (UniqueName: \"kubernetes.io/projected/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-kube-api-access-6jf5f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.241364 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.245073 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.264041 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jf5f\" (UniqueName: \"kubernetes.io/projected/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-kube-api-access-6jf5f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.268819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjr27\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:14 crc kubenswrapper[4799]: I0216 13:03:14.474170 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:15 crc kubenswrapper[4799]: I0216 13:03:15.077092 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27"] Feb 16 13:03:16 crc kubenswrapper[4799]: I0216 13:03:16.030203 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" event={"ID":"9dd7738f-7fe5-4522-94a5-afa6cf94a54d","Type":"ContainerStarted","Data":"d4cde6ce3b323498b707d8d4d54d601d121db858c9019637f19a482be1fd2b7a"} Feb 16 13:03:16 crc kubenswrapper[4799]: I0216 13:03:16.030823 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" event={"ID":"9dd7738f-7fe5-4522-94a5-afa6cf94a54d","Type":"ContainerStarted","Data":"610c20cb4fff7abcf6ff0bff57b4bb9ecdaa7f0fc8b1da24f6b83eb907709519"} Feb 16 13:03:16 crc kubenswrapper[4799]: I0216 13:03:16.048458 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" podStartSLOduration=1.421246087 podStartE2EDuration="2.048437699s" podCreationTimestamp="2026-02-16 13:03:14 +0000 UTC" firstStartedPulling="2026-02-16 13:03:15.082471553 +0000 UTC m=+1900.675486887" lastFinishedPulling="2026-02-16 13:03:15.709663165 +0000 UTC m=+1901.302678499" observedRunningTime="2026-02-16 13:03:16.044998261 +0000 UTC m=+1901.638013605" watchObservedRunningTime="2026-02-16 13:03:16.048437699 +0000 UTC m=+1901.641453043" Feb 16 13:03:18 crc kubenswrapper[4799]: I0216 13:03:18.046794 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-523c-account-create-update-ldgmh"] Feb 16 13:03:18 crc kubenswrapper[4799]: I0216 13:03:18.056486 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2j7p7"] Feb 16 13:03:18 crc 
kubenswrapper[4799]: I0216 13:03:18.064712 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-523c-account-create-update-ldgmh"] Feb 16 13:03:18 crc kubenswrapper[4799]: I0216 13:03:18.074480 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0b02-account-create-update-pbs7h"] Feb 16 13:03:18 crc kubenswrapper[4799]: I0216 13:03:18.083515 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2j7p7"] Feb 16 13:03:18 crc kubenswrapper[4799]: I0216 13:03:18.094184 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0b02-account-create-update-pbs7h"] Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.028617 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-378d-account-create-update-sjsz5"] Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.048259 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cmgtj"] Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.058961 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cmgtj"] Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.065563 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qqjbv"] Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.074966 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qqjbv"] Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.085403 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-378d-account-create-update-sjsz5"] Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.149641 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:03:19 crc kubenswrapper[4799]: E0216 13:03:19.150277 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.160350 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36fe0ab4-e31e-46ec-9e5e-d806b8423138" path="/var/lib/kubelet/pods/36fe0ab4-e31e-46ec-9e5e-d806b8423138/volumes" Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.160959 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f27d260-32a5-4071-b01e-5674ddf856ec" path="/var/lib/kubelet/pods/3f27d260-32a5-4071-b01e-5674ddf856ec/volumes" Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.161576 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2be1ba0-aac6-4d75-a35f-31ba41b971d5" path="/var/lib/kubelet/pods/b2be1ba0-aac6-4d75-a35f-31ba41b971d5/volumes" Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.162113 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db161b46-fe7a-4bd4-826b-052cbcef338f" path="/var/lib/kubelet/pods/db161b46-fe7a-4bd4-826b-052cbcef338f/volumes" Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.163198 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e827d55c-315b-4615-bddb-71bef534c284" path="/var/lib/kubelet/pods/e827d55c-315b-4615-bddb-71bef534c284/volumes" Feb 16 13:03:19 crc kubenswrapper[4799]: I0216 13:03:19.163792 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d" path="/var/lib/kubelet/pods/e82b1d18-d7d9-4af2-bdb0-b5f31aafc20d/volumes" Feb 16 13:03:21 crc kubenswrapper[4799]: I0216 13:03:21.084854 4799 generic.go:334] "Generic (PLEG): container finished" 
podID="9dd7738f-7fe5-4522-94a5-afa6cf94a54d" containerID="d4cde6ce3b323498b707d8d4d54d601d121db858c9019637f19a482be1fd2b7a" exitCode=0 Feb 16 13:03:21 crc kubenswrapper[4799]: I0216 13:03:21.084943 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" event={"ID":"9dd7738f-7fe5-4522-94a5-afa6cf94a54d","Type":"ContainerDied","Data":"d4cde6ce3b323498b707d8d4d54d601d121db858c9019637f19a482be1fd2b7a"} Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.525042 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.621975 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-ssh-key-openstack-edpm-ipam\") pod \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.622306 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jf5f\" (UniqueName: \"kubernetes.io/projected/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-kube-api-access-6jf5f\") pod \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.623109 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-inventory\") pod \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\" (UID: \"9dd7738f-7fe5-4522-94a5-afa6cf94a54d\") " Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.628857 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-kube-api-access-6jf5f" (OuterVolumeSpecName: "kube-api-access-6jf5f") pod "9dd7738f-7fe5-4522-94a5-afa6cf94a54d" (UID: "9dd7738f-7fe5-4522-94a5-afa6cf94a54d"). InnerVolumeSpecName "kube-api-access-6jf5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.654715 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-inventory" (OuterVolumeSpecName: "inventory") pod "9dd7738f-7fe5-4522-94a5-afa6cf94a54d" (UID: "9dd7738f-7fe5-4522-94a5-afa6cf94a54d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.674363 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9dd7738f-7fe5-4522-94a5-afa6cf94a54d" (UID: "9dd7738f-7fe5-4522-94a5-afa6cf94a54d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.725164 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.725207 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:22 crc kubenswrapper[4799]: I0216 13:03:22.725219 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jf5f\" (UniqueName: \"kubernetes.io/projected/9dd7738f-7fe5-4522-94a5-afa6cf94a54d-kube-api-access-6jf5f\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.108108 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" event={"ID":"9dd7738f-7fe5-4522-94a5-afa6cf94a54d","Type":"ContainerDied","Data":"610c20cb4fff7abcf6ff0bff57b4bb9ecdaa7f0fc8b1da24f6b83eb907709519"} Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.108175 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="610c20cb4fff7abcf6ff0bff57b4bb9ecdaa7f0fc8b1da24f6b83eb907709519" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.108249 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjr27" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.186112 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7"] Feb 16 13:03:23 crc kubenswrapper[4799]: E0216 13:03:23.186903 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd7738f-7fe5-4522-94a5-afa6cf94a54d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.187007 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dd7738f-7fe5-4522-94a5-afa6cf94a54d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.187282 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd7738f-7fe5-4522-94a5-afa6cf94a54d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.188286 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.192994 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.193074 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.193494 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.194031 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.219650 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7"] Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.337834 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.337895 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.338419 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/ff2369e0-1189-4a8f-abca-c8db832a8e8c-kube-api-access-d5zxk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.440333 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.440391 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.440624 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/ff2369e0-1189-4a8f-abca-c8db832a8e8c-kube-api-access-d5zxk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.445794 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.445920 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.462760 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/ff2369e0-1189-4a8f-abca-c8db832a8e8c-kube-api-access-d5zxk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-thrw7\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:23 crc kubenswrapper[4799]: I0216 13:03:23.512976 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:03:24 crc kubenswrapper[4799]: I0216 13:03:24.049615 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7"] Feb 16 13:03:24 crc kubenswrapper[4799]: I0216 13:03:24.119716 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" event={"ID":"ff2369e0-1189-4a8f-abca-c8db832a8e8c","Type":"ContainerStarted","Data":"eaf215c1c75a508b1423fff330c6f0597ebc1eb90389cd10e4a9b19c7a79f278"} Feb 16 13:03:26 crc kubenswrapper[4799]: I0216 13:03:26.138890 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" event={"ID":"ff2369e0-1189-4a8f-abca-c8db832a8e8c","Type":"ContainerStarted","Data":"acbf004f7c4f8fa02c83c95e0a3e1166494d5a6ecf73bdb4eddd8ee64cd09666"} Feb 16 13:03:32 crc kubenswrapper[4799]: I0216 13:03:32.149323 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:03:32 crc kubenswrapper[4799]: E0216 13:03:32.150118 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:03:42 crc kubenswrapper[4799]: I0216 13:03:42.575105 4799 scope.go:117] "RemoveContainer" containerID="4ef2d75c52641ba881694cedc7579fa2cacc77fe30ba7c7be4450f8d720c268c" Feb 16 13:03:42 crc kubenswrapper[4799]: I0216 13:03:42.620588 4799 scope.go:117] "RemoveContainer" containerID="399ed703e0088bb71a40985c8e04235e692594d2e384f0dc895d67186f47f1de" Feb 16 
13:03:42 crc kubenswrapper[4799]: I0216 13:03:42.671710 4799 scope.go:117] "RemoveContainer" containerID="36423b169d8031e33fcf223682e5abfd19f1e2465c3295f2fe025d97c32be5b5" Feb 16 13:03:42 crc kubenswrapper[4799]: I0216 13:03:42.721324 4799 scope.go:117] "RemoveContainer" containerID="79f693d265a2142285cebcddd5a5f46075ebfe497bbbc8ceb870e9b848ae7a28" Feb 16 13:03:42 crc kubenswrapper[4799]: I0216 13:03:42.773912 4799 scope.go:117] "RemoveContainer" containerID="d803bea1cd9c8673e2dfabb749747cac89d07d53e8122df4ce16a4ab73dc0994" Feb 16 13:03:42 crc kubenswrapper[4799]: I0216 13:03:42.836365 4799 scope.go:117] "RemoveContainer" containerID="f9df83a5d1c04e808d4d590a0b9a71370a73e735e110822ca0ca7b8014bf2552" Feb 16 13:03:45 crc kubenswrapper[4799]: I0216 13:03:45.162838 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:03:45 crc kubenswrapper[4799]: E0216 13:03:45.163673 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:03:51 crc kubenswrapper[4799]: I0216 13:03:51.037635 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" podStartSLOduration=26.617126413 podStartE2EDuration="28.037614281s" podCreationTimestamp="2026-02-16 13:03:23 +0000 UTC" firstStartedPulling="2026-02-16 13:03:24.05056561 +0000 UTC m=+1909.643580944" lastFinishedPulling="2026-02-16 13:03:25.471053478 +0000 UTC m=+1911.064068812" observedRunningTime="2026-02-16 13:03:26.160197077 +0000 UTC m=+1911.753212411" watchObservedRunningTime="2026-02-16 
13:03:51.037614281 +0000 UTC m=+1936.630629625" Feb 16 13:03:51 crc kubenswrapper[4799]: I0216 13:03:51.051102 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b5wng"] Feb 16 13:03:51 crc kubenswrapper[4799]: I0216 13:03:51.061973 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-b5wng"] Feb 16 13:03:51 crc kubenswrapper[4799]: I0216 13:03:51.161518 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3e4608-cd26-490c-b994-45e90311e4bc" path="/var/lib/kubelet/pods/4d3e4608-cd26-490c-b994-45e90311e4bc/volumes" Feb 16 13:03:57 crc kubenswrapper[4799]: I0216 13:03:57.150372 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:03:57 crc kubenswrapper[4799]: E0216 13:03:57.151408 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:04:03 crc kubenswrapper[4799]: I0216 13:04:03.511494 4799 generic.go:334] "Generic (PLEG): container finished" podID="ff2369e0-1189-4a8f-abca-c8db832a8e8c" containerID="acbf004f7c4f8fa02c83c95e0a3e1166494d5a6ecf73bdb4eddd8ee64cd09666" exitCode=0 Feb 16 13:04:03 crc kubenswrapper[4799]: I0216 13:04:03.511590 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" event={"ID":"ff2369e0-1189-4a8f-abca-c8db832a8e8c","Type":"ContainerDied","Data":"acbf004f7c4f8fa02c83c95e0a3e1166494d5a6ecf73bdb4eddd8ee64cd09666"} Feb 16 13:04:04 crc kubenswrapper[4799]: I0216 13:04:04.988838 4799 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.050196 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/ff2369e0-1189-4a8f-abca-c8db832a8e8c-kube-api-access-d5zxk\") pod \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.050266 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-ssh-key-openstack-edpm-ipam\") pod \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.050421 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-inventory\") pod \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\" (UID: \"ff2369e0-1189-4a8f-abca-c8db832a8e8c\") " Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.058770 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2369e0-1189-4a8f-abca-c8db832a8e8c-kube-api-access-d5zxk" (OuterVolumeSpecName: "kube-api-access-d5zxk") pod "ff2369e0-1189-4a8f-abca-c8db832a8e8c" (UID: "ff2369e0-1189-4a8f-abca-c8db832a8e8c"). InnerVolumeSpecName "kube-api-access-d5zxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.087639 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff2369e0-1189-4a8f-abca-c8db832a8e8c" (UID: "ff2369e0-1189-4a8f-abca-c8db832a8e8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.090471 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-inventory" (OuterVolumeSpecName: "inventory") pod "ff2369e0-1189-4a8f-abca-c8db832a8e8c" (UID: "ff2369e0-1189-4a8f-abca-c8db832a8e8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.152627 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/ff2369e0-1189-4a8f-abca-c8db832a8e8c-kube-api-access-d5zxk\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.152832 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.152924 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff2369e0-1189-4a8f-abca-c8db832a8e8c-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.531763 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" 
event={"ID":"ff2369e0-1189-4a8f-abca-c8db832a8e8c","Type":"ContainerDied","Data":"eaf215c1c75a508b1423fff330c6f0597ebc1eb90389cd10e4a9b19c7a79f278"} Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.532324 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf215c1c75a508b1423fff330c6f0597ebc1eb90389cd10e4a9b19c7a79f278" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.531843 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-thrw7" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.641290 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5"] Feb 16 13:04:05 crc kubenswrapper[4799]: E0216 13:04:05.642220 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2369e0-1189-4a8f-abca-c8db832a8e8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.642254 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2369e0-1189-4a8f-abca-c8db832a8e8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.642588 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2369e0-1189-4a8f-abca-c8db832a8e8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.643941 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.646346 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.646380 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.646815 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.648751 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.651902 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5"] Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.768417 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.768573 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.768641 
4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jm6z\" (UniqueName: \"kubernetes.io/projected/db459b41-b7ab-4982-8889-11233d549c9b-kube-api-access-7jm6z\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.870879 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.870977 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jm6z\" (UniqueName: \"kubernetes.io/projected/db459b41-b7ab-4982-8889-11233d549c9b-kube-api-access-7jm6z\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.871063 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.875329 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.875742 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.891004 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jm6z\" (UniqueName: \"kubernetes.io/projected/db459b41-b7ab-4982-8889-11233d549c9b-kube-api-access-7jm6z\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:05 crc kubenswrapper[4799]: I0216 13:04:05.964645 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:06 crc kubenswrapper[4799]: I0216 13:04:06.591724 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5"] Feb 16 13:04:07 crc kubenswrapper[4799]: I0216 13:04:07.571144 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" event={"ID":"db459b41-b7ab-4982-8889-11233d549c9b","Type":"ContainerStarted","Data":"b9148682a5f618e5679a1dd24fe7982ab08bb44194a55bc2b771a26f75596005"} Feb 16 13:04:07 crc kubenswrapper[4799]: I0216 13:04:07.571726 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" event={"ID":"db459b41-b7ab-4982-8889-11233d549c9b","Type":"ContainerStarted","Data":"c6f874d5c9192bdc237a7b6f3bf93a81e7c47a14b5188b90fa22b8620bab7bb2"} Feb 16 13:04:08 crc kubenswrapper[4799]: I0216 13:04:08.149373 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:04:08 crc kubenswrapper[4799]: E0216 13:04:08.149835 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:04:13 crc kubenswrapper[4799]: I0216 13:04:13.035079 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" podStartSLOduration=7.62072496 podStartE2EDuration="8.03505774s" podCreationTimestamp="2026-02-16 13:04:05 +0000 UTC" firstStartedPulling="2026-02-16 
13:04:06.591355717 +0000 UTC m=+1952.184371051" lastFinishedPulling="2026-02-16 13:04:07.005688497 +0000 UTC m=+1952.598703831" observedRunningTime="2026-02-16 13:04:07.597154437 +0000 UTC m=+1953.190169771" watchObservedRunningTime="2026-02-16 13:04:13.03505774 +0000 UTC m=+1958.628073084" Feb 16 13:04:13 crc kubenswrapper[4799]: I0216 13:04:13.043875 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-swpq5"] Feb 16 13:04:13 crc kubenswrapper[4799]: I0216 13:04:13.054349 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-swpq5"] Feb 16 13:04:13 crc kubenswrapper[4799]: I0216 13:04:13.170238 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e37ea2-a3b0-43c1-94d4-c545edaed454" path="/var/lib/kubelet/pods/26e37ea2-a3b0-43c1-94d4-c545edaed454/volumes" Feb 16 13:04:15 crc kubenswrapper[4799]: I0216 13:04:15.030784 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqjts"] Feb 16 13:04:15 crc kubenswrapper[4799]: I0216 13:04:15.039468 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqjts"] Feb 16 13:04:15 crc kubenswrapper[4799]: I0216 13:04:15.163987 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37caa4cf-2608-483b-a75d-eb94ae2d41f5" path="/var/lib/kubelet/pods/37caa4cf-2608-483b-a75d-eb94ae2d41f5/volumes" Feb 16 13:04:19 crc kubenswrapper[4799]: I0216 13:04:19.150114 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:04:19 crc kubenswrapper[4799]: E0216 13:04:19.150934 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:04:34 crc kubenswrapper[4799]: I0216 13:04:34.150264 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:04:34 crc kubenswrapper[4799]: E0216 13:04:34.150937 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:04:43 crc kubenswrapper[4799]: I0216 13:04:43.011915 4799 scope.go:117] "RemoveContainer" containerID="8b75db812a15f28ecefb79692e06e0d8edec74be2fce361e62c0f7079d9581e3" Feb 16 13:04:43 crc kubenswrapper[4799]: I0216 13:04:43.050284 4799 scope.go:117] "RemoveContainer" containerID="93903097425ced611a882b483272217a0be1281f1517258e6ec2023ff409e261" Feb 16 13:04:43 crc kubenswrapper[4799]: I0216 13:04:43.104705 4799 scope.go:117] "RemoveContainer" containerID="91e98d674aebef321eba251d510b930aafdb3f18c0abb8d9dd6e2ca492bb134d" Feb 16 13:04:45 crc kubenswrapper[4799]: I0216 13:04:45.156672 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:04:45 crc kubenswrapper[4799]: E0216 13:04:45.157289 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:04:57 crc kubenswrapper[4799]: I0216 13:04:57.001480 4799 generic.go:334] "Generic (PLEG): container finished" podID="db459b41-b7ab-4982-8889-11233d549c9b" containerID="b9148682a5f618e5679a1dd24fe7982ab08bb44194a55bc2b771a26f75596005" exitCode=0 Feb 16 13:04:57 crc kubenswrapper[4799]: I0216 13:04:57.001636 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" event={"ID":"db459b41-b7ab-4982-8889-11233d549c9b","Type":"ContainerDied","Data":"b9148682a5f618e5679a1dd24fe7982ab08bb44194a55bc2b771a26f75596005"} Feb 16 13:04:57 crc kubenswrapper[4799]: I0216 13:04:57.149913 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.012879 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"1faa0f6dc2243e4711410dc1041f8d75eb757e3e7a9756791421eafb48ea14d3"} Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.506883 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.656245 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-inventory\") pod \"db459b41-b7ab-4982-8889-11233d549c9b\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.656450 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jm6z\" (UniqueName: \"kubernetes.io/projected/db459b41-b7ab-4982-8889-11233d549c9b-kube-api-access-7jm6z\") pod \"db459b41-b7ab-4982-8889-11233d549c9b\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.656558 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-ssh-key-openstack-edpm-ipam\") pod \"db459b41-b7ab-4982-8889-11233d549c9b\" (UID: \"db459b41-b7ab-4982-8889-11233d549c9b\") " Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.662306 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db459b41-b7ab-4982-8889-11233d549c9b-kube-api-access-7jm6z" (OuterVolumeSpecName: "kube-api-access-7jm6z") pod "db459b41-b7ab-4982-8889-11233d549c9b" (UID: "db459b41-b7ab-4982-8889-11233d549c9b"). InnerVolumeSpecName "kube-api-access-7jm6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.684873 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "db459b41-b7ab-4982-8889-11233d549c9b" (UID: "db459b41-b7ab-4982-8889-11233d549c9b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.686520 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-inventory" (OuterVolumeSpecName: "inventory") pod "db459b41-b7ab-4982-8889-11233d549c9b" (UID: "db459b41-b7ab-4982-8889-11233d549c9b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.759293 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jm6z\" (UniqueName: \"kubernetes.io/projected/db459b41-b7ab-4982-8889-11233d549c9b-kube-api-access-7jm6z\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.759351 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:58 crc kubenswrapper[4799]: I0216 13:04:58.759364 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db459b41-b7ab-4982-8889-11233d549c9b-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.022386 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" 
event={"ID":"db459b41-b7ab-4982-8889-11233d549c9b","Type":"ContainerDied","Data":"c6f874d5c9192bdc237a7b6f3bf93a81e7c47a14b5188b90fa22b8620bab7bb2"} Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.022428 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f874d5c9192bdc237a7b6f3bf93a81e7c47a14b5188b90fa22b8620bab7bb2" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.022441 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.117139 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2kzk7"] Feb 16 13:04:59 crc kubenswrapper[4799]: E0216 13:04:59.117680 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db459b41-b7ab-4982-8889-11233d549c9b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.117709 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="db459b41-b7ab-4982-8889-11233d549c9b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.117970 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="db459b41-b7ab-4982-8889-11233d549c9b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.118939 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.122353 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.122634 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.122811 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.123178 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.132301 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2kzk7"] Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.270788 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.271245 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfj7\" (UniqueName: \"kubernetes.io/projected/b7657976-4772-4623-b14e-c9de2130efa5-kube-api-access-hjfj7\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.271409 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.373162 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjfj7\" (UniqueName: \"kubernetes.io/projected/b7657976-4772-4623-b14e-c9de2130efa5-kube-api-access-hjfj7\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.373263 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.373363 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.378625 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 
16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.378672 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.391158 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjfj7\" (UniqueName: \"kubernetes.io/projected/b7657976-4772-4623-b14e-c9de2130efa5-kube-api-access-hjfj7\") pod \"ssh-known-hosts-edpm-deployment-2kzk7\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.441938 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:04:59 crc kubenswrapper[4799]: I0216 13:04:59.968769 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2kzk7"] Feb 16 13:05:00 crc kubenswrapper[4799]: I0216 13:05:00.036099 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" event={"ID":"b7657976-4772-4623-b14e-c9de2130efa5","Type":"ContainerStarted","Data":"ef8bd005fdb9e5993fd670276b28e42d66485dda58f0df8e7095b548cd3c49bf"} Feb 16 13:05:00 crc kubenswrapper[4799]: I0216 13:05:00.047472 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-69hl6"] Feb 16 13:05:00 crc kubenswrapper[4799]: I0216 13:05:00.055266 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-69hl6"] Feb 16 13:05:01 crc kubenswrapper[4799]: I0216 13:05:01.047390 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" 
event={"ID":"b7657976-4772-4623-b14e-c9de2130efa5","Type":"ContainerStarted","Data":"f58b964f14036260d319476a27c7c870ecfd2252acab0ead21fc83a211119d7d"} Feb 16 13:05:01 crc kubenswrapper[4799]: I0216 13:05:01.070300 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" podStartSLOduration=1.317084204 podStartE2EDuration="2.070275156s" podCreationTimestamp="2026-02-16 13:04:59 +0000 UTC" firstStartedPulling="2026-02-16 13:04:59.975900915 +0000 UTC m=+2005.568916249" lastFinishedPulling="2026-02-16 13:05:00.729091847 +0000 UTC m=+2006.322107201" observedRunningTime="2026-02-16 13:05:01.062882215 +0000 UTC m=+2006.655897569" watchObservedRunningTime="2026-02-16 13:05:01.070275156 +0000 UTC m=+2006.663290490" Feb 16 13:05:01 crc kubenswrapper[4799]: I0216 13:05:01.163159 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6474380-de01-4e68-bcea-caf2ce9bb2aa" path="/var/lib/kubelet/pods/e6474380-de01-4e68-bcea-caf2ce9bb2aa/volumes" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.471150 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wsvp7"] Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.474201 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.488630 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsvp7"] Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.648183 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-utilities\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.648278 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-catalog-content\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.648980 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6srmz\" (UniqueName: \"kubernetes.io/projected/67fde883-ad36-423b-804e-2ba432908bce-kube-api-access-6srmz\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.751234 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-utilities\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.751331 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-catalog-content\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.751365 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6srmz\" (UniqueName: \"kubernetes.io/projected/67fde883-ad36-423b-804e-2ba432908bce-kube-api-access-6srmz\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.751721 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-utilities\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.751778 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-catalog-content\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.776904 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6srmz\" (UniqueName: \"kubernetes.io/projected/67fde883-ad36-423b-804e-2ba432908bce-kube-api-access-6srmz\") pod \"certified-operators-wsvp7\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:06 crc kubenswrapper[4799]: I0216 13:05:06.804767 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:07 crc kubenswrapper[4799]: I0216 13:05:07.355436 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsvp7"] Feb 16 13:05:08 crc kubenswrapper[4799]: I0216 13:05:08.117342 4799 generic.go:334] "Generic (PLEG): container finished" podID="67fde883-ad36-423b-804e-2ba432908bce" containerID="619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2" exitCode=0 Feb 16 13:05:08 crc kubenswrapper[4799]: I0216 13:05:08.117426 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsvp7" event={"ID":"67fde883-ad36-423b-804e-2ba432908bce","Type":"ContainerDied","Data":"619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2"} Feb 16 13:05:08 crc kubenswrapper[4799]: I0216 13:05:08.117793 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsvp7" event={"ID":"67fde883-ad36-423b-804e-2ba432908bce","Type":"ContainerStarted","Data":"08e10217866582823b4446bbc4dbe54e85fe6fbbc88de8ec8380bbcd77321a11"} Feb 16 13:05:08 crc kubenswrapper[4799]: I0216 13:05:08.121260 4799 generic.go:334] "Generic (PLEG): container finished" podID="b7657976-4772-4623-b14e-c9de2130efa5" containerID="f58b964f14036260d319476a27c7c870ecfd2252acab0ead21fc83a211119d7d" exitCode=0 Feb 16 13:05:08 crc kubenswrapper[4799]: I0216 13:05:08.121313 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" event={"ID":"b7657976-4772-4623-b14e-c9de2130efa5","Type":"ContainerDied","Data":"f58b964f14036260d319476a27c7c870ecfd2252acab0ead21fc83a211119d7d"} Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.131953 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsvp7" 
event={"ID":"67fde883-ad36-423b-804e-2ba432908bce","Type":"ContainerStarted","Data":"32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751"} Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.608404 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.717631 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-inventory-0\") pod \"b7657976-4772-4623-b14e-c9de2130efa5\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.717705 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-ssh-key-openstack-edpm-ipam\") pod \"b7657976-4772-4623-b14e-c9de2130efa5\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.717753 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjfj7\" (UniqueName: \"kubernetes.io/projected/b7657976-4772-4623-b14e-c9de2130efa5-kube-api-access-hjfj7\") pod \"b7657976-4772-4623-b14e-c9de2130efa5\" (UID: \"b7657976-4772-4623-b14e-c9de2130efa5\") " Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.724091 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7657976-4772-4623-b14e-c9de2130efa5-kube-api-access-hjfj7" (OuterVolumeSpecName: "kube-api-access-hjfj7") pod "b7657976-4772-4623-b14e-c9de2130efa5" (UID: "b7657976-4772-4623-b14e-c9de2130efa5"). InnerVolumeSpecName "kube-api-access-hjfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.746414 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b7657976-4772-4623-b14e-c9de2130efa5" (UID: "b7657976-4772-4623-b14e-c9de2130efa5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.748990 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b7657976-4772-4623-b14e-c9de2130efa5" (UID: "b7657976-4772-4623-b14e-c9de2130efa5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.820732 4799 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.820770 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7657976-4772-4623-b14e-c9de2130efa5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:09 crc kubenswrapper[4799]: I0216 13:05:09.820780 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjfj7\" (UniqueName: \"kubernetes.io/projected/b7657976-4772-4623-b14e-c9de2130efa5-kube-api-access-hjfj7\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.146188 4799 generic.go:334] "Generic (PLEG): container finished" podID="67fde883-ad36-423b-804e-2ba432908bce" 
containerID="32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751" exitCode=0 Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.146338 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsvp7" event={"ID":"67fde883-ad36-423b-804e-2ba432908bce","Type":"ContainerDied","Data":"32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751"} Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.148379 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" event={"ID":"b7657976-4772-4623-b14e-c9de2130efa5","Type":"ContainerDied","Data":"ef8bd005fdb9e5993fd670276b28e42d66485dda58f0df8e7095b548cd3c49bf"} Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.148414 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef8bd005fdb9e5993fd670276b28e42d66485dda58f0df8e7095b548cd3c49bf" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.148463 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2kzk7" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.251264 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk"] Feb 16 13:05:10 crc kubenswrapper[4799]: E0216 13:05:10.251873 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7657976-4772-4623-b14e-c9de2130efa5" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.251902 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7657976-4772-4623-b14e-c9de2130efa5" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.252240 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7657976-4772-4623-b14e-c9de2130efa5" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.253079 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk"] Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.253221 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.256303 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.256773 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.257147 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.257338 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.431871 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.431981 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.432153 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln56z\" (UniqueName: 
\"kubernetes.io/projected/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-kube-api-access-ln56z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.534059 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.534245 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln56z\" (UniqueName: \"kubernetes.io/projected/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-kube-api-access-ln56z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.534317 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.540833 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.545680 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.553715 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln56z\" (UniqueName: \"kubernetes.io/projected/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-kube-api-access-ln56z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bgvxk\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:10 crc kubenswrapper[4799]: I0216 13:05:10.579209 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:11 crc kubenswrapper[4799]: I0216 13:05:11.146016 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk"] Feb 16 13:05:11 crc kubenswrapper[4799]: I0216 13:05:11.164525 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" event={"ID":"bfb29f60-f76e-40d0-b672-ae1be3eb5c84","Type":"ContainerStarted","Data":"975feefb65410dec3747c5f3f39c581882e2369454ec0f9fe48dfc7957d34695"} Feb 16 13:05:11 crc kubenswrapper[4799]: I0216 13:05:11.166593 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsvp7" event={"ID":"67fde883-ad36-423b-804e-2ba432908bce","Type":"ContainerStarted","Data":"e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394"} Feb 16 13:05:11 crc kubenswrapper[4799]: I0216 13:05:11.184997 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wsvp7" podStartSLOduration=2.676065946 podStartE2EDuration="5.184980146s" podCreationTimestamp="2026-02-16 13:05:06 +0000 UTC" firstStartedPulling="2026-02-16 13:05:08.120387524 +0000 UTC m=+2013.713402868" lastFinishedPulling="2026-02-16 13:05:10.629301734 +0000 UTC m=+2016.222317068" observedRunningTime="2026-02-16 13:05:11.183853964 +0000 UTC m=+2016.776869298" watchObservedRunningTime="2026-02-16 13:05:11.184980146 +0000 UTC m=+2016.777995480" Feb 16 13:05:12 crc kubenswrapper[4799]: I0216 13:05:12.179759 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" event={"ID":"bfb29f60-f76e-40d0-b672-ae1be3eb5c84","Type":"ContainerStarted","Data":"3d9280d94b761a49941a5b379f90b2c4b88bdc5f338ed30d2f90b54a59ea5c11"} Feb 16 13:05:12 crc kubenswrapper[4799]: I0216 13:05:12.204520 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" podStartSLOduration=1.794658375 podStartE2EDuration="2.204500187s" podCreationTimestamp="2026-02-16 13:05:10 +0000 UTC" firstStartedPulling="2026-02-16 13:05:11.141769467 +0000 UTC m=+2016.734784801" lastFinishedPulling="2026-02-16 13:05:11.551611129 +0000 UTC m=+2017.144626613" observedRunningTime="2026-02-16 13:05:12.203832408 +0000 UTC m=+2017.796847742" watchObservedRunningTime="2026-02-16 13:05:12.204500187 +0000 UTC m=+2017.797515521" Feb 16 13:05:16 crc kubenswrapper[4799]: I0216 13:05:16.805596 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:16 crc kubenswrapper[4799]: I0216 13:05:16.806374 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:16 crc kubenswrapper[4799]: I0216 13:05:16.853792 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:17 crc kubenswrapper[4799]: I0216 13:05:17.274441 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:17 crc kubenswrapper[4799]: I0216 13:05:17.327077 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsvp7"] Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.244611 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wsvp7" podUID="67fde883-ad36-423b-804e-2ba432908bce" containerName="registry-server" containerID="cri-o://e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394" gracePeriod=2 Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.732459 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.841367 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-utilities\") pod \"67fde883-ad36-423b-804e-2ba432908bce\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.841513 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-catalog-content\") pod \"67fde883-ad36-423b-804e-2ba432908bce\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.841593 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6srmz\" (UniqueName: \"kubernetes.io/projected/67fde883-ad36-423b-804e-2ba432908bce-kube-api-access-6srmz\") pod \"67fde883-ad36-423b-804e-2ba432908bce\" (UID: \"67fde883-ad36-423b-804e-2ba432908bce\") " Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.843546 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-utilities" (OuterVolumeSpecName: "utilities") pod "67fde883-ad36-423b-804e-2ba432908bce" (UID: "67fde883-ad36-423b-804e-2ba432908bce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.849638 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67fde883-ad36-423b-804e-2ba432908bce-kube-api-access-6srmz" (OuterVolumeSpecName: "kube-api-access-6srmz") pod "67fde883-ad36-423b-804e-2ba432908bce" (UID: "67fde883-ad36-423b-804e-2ba432908bce"). InnerVolumeSpecName "kube-api-access-6srmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.896955 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67fde883-ad36-423b-804e-2ba432908bce" (UID: "67fde883-ad36-423b-804e-2ba432908bce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.945207 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.945327 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67fde883-ad36-423b-804e-2ba432908bce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:19 crc kubenswrapper[4799]: I0216 13:05:19.945356 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6srmz\" (UniqueName: \"kubernetes.io/projected/67fde883-ad36-423b-804e-2ba432908bce-kube-api-access-6srmz\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.261534 4799 generic.go:334] "Generic (PLEG): container finished" podID="67fde883-ad36-423b-804e-2ba432908bce" containerID="e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394" exitCode=0 Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.261601 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsvp7" event={"ID":"67fde883-ad36-423b-804e-2ba432908bce","Type":"ContainerDied","Data":"e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394"} Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.261635 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-wsvp7" event={"ID":"67fde883-ad36-423b-804e-2ba432908bce","Type":"ContainerDied","Data":"08e10217866582823b4446bbc4dbe54e85fe6fbbc88de8ec8380bbcd77321a11"} Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.261656 4799 scope.go:117] "RemoveContainer" containerID="e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.261806 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsvp7" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.314303 4799 scope.go:117] "RemoveContainer" containerID="32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.316350 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsvp7"] Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.328577 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wsvp7"] Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.339654 4799 scope.go:117] "RemoveContainer" containerID="619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.383888 4799 scope.go:117] "RemoveContainer" containerID="e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394" Feb 16 13:05:20 crc kubenswrapper[4799]: E0216 13:05:20.384486 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394\": container with ID starting with e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394 not found: ID does not exist" containerID="e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 
13:05:20.384517 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394"} err="failed to get container status \"e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394\": rpc error: code = NotFound desc = could not find container \"e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394\": container with ID starting with e0be60c6b4374c48fb9c0e400458ce8ccbfd0ee1eac4c268e6000b0dff6dc394 not found: ID does not exist" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.384541 4799 scope.go:117] "RemoveContainer" containerID="32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751" Feb 16 13:05:20 crc kubenswrapper[4799]: E0216 13:05:20.384871 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751\": container with ID starting with 32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751 not found: ID does not exist" containerID="32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.384903 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751"} err="failed to get container status \"32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751\": rpc error: code = NotFound desc = could not find container \"32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751\": container with ID starting with 32319ea6f902002c70a6592b8dd93c6fdfaa812ad464ada6885c77bf2fe95751 not found: ID does not exist" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.384921 4799 scope.go:117] "RemoveContainer" containerID="619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2" Feb 16 13:05:20 crc 
kubenswrapper[4799]: E0216 13:05:20.385471 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2\": container with ID starting with 619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2 not found: ID does not exist" containerID="619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2" Feb 16 13:05:20 crc kubenswrapper[4799]: I0216 13:05:20.385541 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2"} err="failed to get container status \"619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2\": rpc error: code = NotFound desc = could not find container \"619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2\": container with ID starting with 619d77c4ced0577fee53d63ac7338c375dc6cdaac2e40e5c015343fe1b4b4ed2 not found: ID does not exist" Feb 16 13:05:21 crc kubenswrapper[4799]: I0216 13:05:21.163156 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67fde883-ad36-423b-804e-2ba432908bce" path="/var/lib/kubelet/pods/67fde883-ad36-423b-804e-2ba432908bce/volumes" Feb 16 13:05:21 crc kubenswrapper[4799]: I0216 13:05:21.289605 4799 generic.go:334] "Generic (PLEG): container finished" podID="bfb29f60-f76e-40d0-b672-ae1be3eb5c84" containerID="3d9280d94b761a49941a5b379f90b2c4b88bdc5f338ed30d2f90b54a59ea5c11" exitCode=0 Feb 16 13:05:21 crc kubenswrapper[4799]: I0216 13:05:21.289697 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" event={"ID":"bfb29f60-f76e-40d0-b672-ae1be3eb5c84","Type":"ContainerDied","Data":"3d9280d94b761a49941a5b379f90b2c4b88bdc5f338ed30d2f90b54a59ea5c11"} Feb 16 13:05:22 crc kubenswrapper[4799]: I0216 13:05:22.757044 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:22 crc kubenswrapper[4799]: I0216 13:05:22.907391 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-ssh-key-openstack-edpm-ipam\") pod \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " Feb 16 13:05:22 crc kubenswrapper[4799]: I0216 13:05:22.907508 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln56z\" (UniqueName: \"kubernetes.io/projected/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-kube-api-access-ln56z\") pod \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " Feb 16 13:05:22 crc kubenswrapper[4799]: I0216 13:05:22.907936 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-inventory\") pod \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\" (UID: \"bfb29f60-f76e-40d0-b672-ae1be3eb5c84\") " Feb 16 13:05:22 crc kubenswrapper[4799]: I0216 13:05:22.913375 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-kube-api-access-ln56z" (OuterVolumeSpecName: "kube-api-access-ln56z") pod "bfb29f60-f76e-40d0-b672-ae1be3eb5c84" (UID: "bfb29f60-f76e-40d0-b672-ae1be3eb5c84"). InnerVolumeSpecName "kube-api-access-ln56z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:05:22 crc kubenswrapper[4799]: I0216 13:05:22.938353 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-inventory" (OuterVolumeSpecName: "inventory") pod "bfb29f60-f76e-40d0-b672-ae1be3eb5c84" (UID: "bfb29f60-f76e-40d0-b672-ae1be3eb5c84"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:05:22 crc kubenswrapper[4799]: I0216 13:05:22.941506 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bfb29f60-f76e-40d0-b672-ae1be3eb5c84" (UID: "bfb29f60-f76e-40d0-b672-ae1be3eb5c84"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.011293 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.011379 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.011398 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln56z\" (UniqueName: \"kubernetes.io/projected/bfb29f60-f76e-40d0-b672-ae1be3eb5c84-kube-api-access-ln56z\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.346353 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" event={"ID":"bfb29f60-f76e-40d0-b672-ae1be3eb5c84","Type":"ContainerDied","Data":"975feefb65410dec3747c5f3f39c581882e2369454ec0f9fe48dfc7957d34695"} Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.346432 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="975feefb65410dec3747c5f3f39c581882e2369454ec0f9fe48dfc7957d34695" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 
13:05:23.346539 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bgvxk" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.399065 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558"] Feb 16 13:05:23 crc kubenswrapper[4799]: E0216 13:05:23.399602 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fde883-ad36-423b-804e-2ba432908bce" containerName="registry-server" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.399620 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fde883-ad36-423b-804e-2ba432908bce" containerName="registry-server" Feb 16 13:05:23 crc kubenswrapper[4799]: E0216 13:05:23.399633 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb29f60-f76e-40d0-b672-ae1be3eb5c84" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.399643 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb29f60-f76e-40d0-b672-ae1be3eb5c84" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:05:23 crc kubenswrapper[4799]: E0216 13:05:23.399664 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fde883-ad36-423b-804e-2ba432908bce" containerName="extract-content" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.399670 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fde883-ad36-423b-804e-2ba432908bce" containerName="extract-content" Feb 16 13:05:23 crc kubenswrapper[4799]: E0216 13:05:23.399699 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fde883-ad36-423b-804e-2ba432908bce" containerName="extract-utilities" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.399712 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fde883-ad36-423b-804e-2ba432908bce" containerName="extract-utilities" Feb 16 
13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.399906 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="67fde883-ad36-423b-804e-2ba432908bce" containerName="registry-server" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.399934 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb29f60-f76e-40d0-b672-ae1be3eb5c84" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.400681 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.403856 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.404148 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.404657 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.405778 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.424960 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558"] Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.536677 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc 
kubenswrapper[4799]: I0216 13:05:23.537196 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqqxq\" (UniqueName: \"kubernetes.io/projected/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-kube-api-access-jqqxq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.537232 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.639599 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.639748 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqqxq\" (UniqueName: \"kubernetes.io/projected/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-kube-api-access-jqqxq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.639779 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.643536 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.643561 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.657350 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqqxq\" (UniqueName: \"kubernetes.io/projected/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-kube-api-access-jqqxq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2558\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:23 crc kubenswrapper[4799]: I0216 13:05:23.745257 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:24 crc kubenswrapper[4799]: I0216 13:05:24.299469 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558"] Feb 16 13:05:24 crc kubenswrapper[4799]: I0216 13:05:24.306945 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:05:24 crc kubenswrapper[4799]: I0216 13:05:24.360143 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" event={"ID":"cb5e39c0-c809-4971-a2ea-f2a01d9f4493","Type":"ContainerStarted","Data":"8db787ad4bc50283a41dedb9709b6f0e90505340688e1d579c0b549ce01054b8"} Feb 16 13:05:25 crc kubenswrapper[4799]: I0216 13:05:25.374562 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" event={"ID":"cb5e39c0-c809-4971-a2ea-f2a01d9f4493","Type":"ContainerStarted","Data":"12bae8f8cc3887a17c29bfa333349276795b72e1d0e2514f5af5bfee284c91bf"} Feb 16 13:05:25 crc kubenswrapper[4799]: I0216 13:05:25.396979 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" podStartSLOduration=1.705191679 podStartE2EDuration="2.396960873s" podCreationTimestamp="2026-02-16 13:05:23 +0000 UTC" firstStartedPulling="2026-02-16 13:05:24.306252967 +0000 UTC m=+2029.899268301" lastFinishedPulling="2026-02-16 13:05:24.998022161 +0000 UTC m=+2030.591037495" observedRunningTime="2026-02-16 13:05:25.394911114 +0000 UTC m=+2030.987926448" watchObservedRunningTime="2026-02-16 13:05:25.396960873 +0000 UTC m=+2030.989976207" Feb 16 13:05:34 crc kubenswrapper[4799]: I0216 13:05:34.463704 4799 generic.go:334] "Generic (PLEG): container finished" podID="cb5e39c0-c809-4971-a2ea-f2a01d9f4493" 
containerID="12bae8f8cc3887a17c29bfa333349276795b72e1d0e2514f5af5bfee284c91bf" exitCode=0 Feb 16 13:05:34 crc kubenswrapper[4799]: I0216 13:05:34.463787 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" event={"ID":"cb5e39c0-c809-4971-a2ea-f2a01d9f4493","Type":"ContainerDied","Data":"12bae8f8cc3887a17c29bfa333349276795b72e1d0e2514f5af5bfee284c91bf"} Feb 16 13:05:35 crc kubenswrapper[4799]: I0216 13:05:35.926698 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.118983 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-inventory\") pod \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.119180 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-ssh-key-openstack-edpm-ipam\") pod \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.119322 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqqxq\" (UniqueName: \"kubernetes.io/projected/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-kube-api-access-jqqxq\") pod \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\" (UID: \"cb5e39c0-c809-4971-a2ea-f2a01d9f4493\") " Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.124599 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-kube-api-access-jqqxq" (OuterVolumeSpecName: "kube-api-access-jqqxq") pod 
"cb5e39c0-c809-4971-a2ea-f2a01d9f4493" (UID: "cb5e39c0-c809-4971-a2ea-f2a01d9f4493"). InnerVolumeSpecName "kube-api-access-jqqxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.148733 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-inventory" (OuterVolumeSpecName: "inventory") pod "cb5e39c0-c809-4971-a2ea-f2a01d9f4493" (UID: "cb5e39c0-c809-4971-a2ea-f2a01d9f4493"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.150199 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cb5e39c0-c809-4971-a2ea-f2a01d9f4493" (UID: "cb5e39c0-c809-4971-a2ea-f2a01d9f4493"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.222880 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.222917 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.222929 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqqxq\" (UniqueName: \"kubernetes.io/projected/cb5e39c0-c809-4971-a2ea-f2a01d9f4493-kube-api-access-jqqxq\") on node \"crc\" DevicePath \"\"" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.492699 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" event={"ID":"cb5e39c0-c809-4971-a2ea-f2a01d9f4493","Type":"ContainerDied","Data":"8db787ad4bc50283a41dedb9709b6f0e90505340688e1d579c0b549ce01054b8"} Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.493096 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8db787ad4bc50283a41dedb9709b6f0e90505340688e1d579c0b549ce01054b8" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.493198 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2558" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.670004 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67"] Feb 16 13:05:36 crc kubenswrapper[4799]: E0216 13:05:36.670461 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5e39c0-c809-4971-a2ea-f2a01d9f4493" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.670480 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5e39c0-c809-4971-a2ea-f2a01d9f4493" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.670696 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5e39c0-c809-4971-a2ea-f2a01d9f4493" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.671474 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.674627 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.674945 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.675251 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.675544 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.675906 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.676036 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.676171 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.676339 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732507 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732577 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732613 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732648 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732752 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732785 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732829 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732864 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.732989 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: 
\"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.733056 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.733091 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.733182 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.733426 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2gq\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-kube-api-access-ds2gq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.733494 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.756194 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67"] Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835044 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds2gq\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-kube-api-access-ds2gq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835096 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835167 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835199 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835222 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835249 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835276 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: 
\"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835294 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835329 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835357 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835375 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835396 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835415 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.835439 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.842038 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 
13:05:36.842217 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.842529 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.842990 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.844779 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.845013 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.845269 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.846207 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.846628 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.847982 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: 
\"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.848945 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.850444 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.853834 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds2gq\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-kube-api-access-ds2gq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:36 crc kubenswrapper[4799]: I0216 13:05:36.856834 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fk67\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:37 crc kubenswrapper[4799]: I0216 13:05:37.021489 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:05:37 crc kubenswrapper[4799]: I0216 13:05:37.580433 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67"] Feb 16 13:05:37 crc kubenswrapper[4799]: W0216 13:05:37.581460 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ad5bcca_c29e_4594_8698_4a139a80eb92.slice/crio-0adcaa933e1cdf56e492944b49aab1138f568476e089a1ad18d38488585a6db0 WatchSource:0}: Error finding container 0adcaa933e1cdf56e492944b49aab1138f568476e089a1ad18d38488585a6db0: Status 404 returned error can't find the container with id 0adcaa933e1cdf56e492944b49aab1138f568476e089a1ad18d38488585a6db0 Feb 16 13:05:38 crc kubenswrapper[4799]: I0216 13:05:38.511095 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" event={"ID":"6ad5bcca-c29e-4594-8698-4a139a80eb92","Type":"ContainerStarted","Data":"265ab2d153c684a3b68acd381204c484382d9dd804abdf8a3fd999430daf2ff3"} Feb 16 13:05:38 crc kubenswrapper[4799]: I0216 13:05:38.511470 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" event={"ID":"6ad5bcca-c29e-4594-8698-4a139a80eb92","Type":"ContainerStarted","Data":"0adcaa933e1cdf56e492944b49aab1138f568476e089a1ad18d38488585a6db0"} Feb 16 13:05:38 crc kubenswrapper[4799]: I0216 13:05:38.546543 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" podStartSLOduration=2.132268663 podStartE2EDuration="2.546518959s" 
podCreationTimestamp="2026-02-16 13:05:36 +0000 UTC" firstStartedPulling="2026-02-16 13:05:37.583900869 +0000 UTC m=+2043.176916203" lastFinishedPulling="2026-02-16 13:05:37.998151155 +0000 UTC m=+2043.591166499" observedRunningTime="2026-02-16 13:05:38.535288389 +0000 UTC m=+2044.128303733" watchObservedRunningTime="2026-02-16 13:05:38.546518959 +0000 UTC m=+2044.139534303" Feb 16 13:05:43 crc kubenswrapper[4799]: I0216 13:05:43.233805 4799 scope.go:117] "RemoveContainer" containerID="112cb8e3158f1bb81150d382e2563fda0af1811256c6fc0a501f0624d1bb6885" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.137983 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mwv94"] Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.142917 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.158701 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mwv94"] Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.207418 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-catalog-content\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.207619 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwp6k\" (UniqueName: \"kubernetes.io/projected/1cb7f6df-08e1-4ece-81f9-05194879d60e-kube-api-access-gwp6k\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.207698 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-utilities\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.309359 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwp6k\" (UniqueName: \"kubernetes.io/projected/1cb7f6df-08e1-4ece-81f9-05194879d60e-kube-api-access-gwp6k\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.309490 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-utilities\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.309564 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-catalog-content\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.310205 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-utilities\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.310309 4799 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-catalog-content\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.334181 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwp6k\" (UniqueName: \"kubernetes.io/projected/1cb7f6df-08e1-4ece-81f9-05194879d60e-kube-api-access-gwp6k\") pod \"redhat-operators-mwv94\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.474693 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:12 crc kubenswrapper[4799]: I0216 13:06:12.970170 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mwv94"] Feb 16 13:06:13 crc kubenswrapper[4799]: I0216 13:06:13.818888 4799 generic.go:334] "Generic (PLEG): container finished" podID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerID="96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9" exitCode=0 Feb 16 13:06:13 crc kubenswrapper[4799]: I0216 13:06:13.818996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwv94" event={"ID":"1cb7f6df-08e1-4ece-81f9-05194879d60e","Type":"ContainerDied","Data":"96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9"} Feb 16 13:06:13 crc kubenswrapper[4799]: I0216 13:06:13.819466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwv94" event={"ID":"1cb7f6df-08e1-4ece-81f9-05194879d60e","Type":"ContainerStarted","Data":"4f83116d2e4ee7e3edb8360db4a869188443673f3ef0504f3ac8cd832274acb8"} Feb 16 13:06:14 crc kubenswrapper[4799]: I0216 13:06:14.830403 4799 generic.go:334] 
"Generic (PLEG): container finished" podID="6ad5bcca-c29e-4594-8698-4a139a80eb92" containerID="265ab2d153c684a3b68acd381204c484382d9dd804abdf8a3fd999430daf2ff3" exitCode=0 Feb 16 13:06:14 crc kubenswrapper[4799]: I0216 13:06:14.830503 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" event={"ID":"6ad5bcca-c29e-4594-8698-4a139a80eb92","Type":"ContainerDied","Data":"265ab2d153c684a3b68acd381204c484382d9dd804abdf8a3fd999430daf2ff3"} Feb 16 13:06:14 crc kubenswrapper[4799]: I0216 13:06:14.834430 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwv94" event={"ID":"1cb7f6df-08e1-4ece-81f9-05194879d60e","Type":"ContainerStarted","Data":"5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8"} Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.424212 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.612613 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-bootstrap-combined-ca-bundle\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.612690 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-ovn-default-certs-0\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.612721 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ssh-key-openstack-edpm-ipam\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.612784 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-repo-setup-combined-ca-bundle\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.612822 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.612923 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-neutron-metadata-combined-ca-bundle\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.612977 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.613028 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-nova-combined-ca-bundle\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.613098 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.613156 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ovn-combined-ca-bundle\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.613232 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds2gq\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-kube-api-access-ds2gq\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.613282 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-inventory\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.613331 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-telemetry-combined-ca-bundle\") pod 
\"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.613378 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-libvirt-combined-ca-bundle\") pod \"6ad5bcca-c29e-4594-8698-4a139a80eb92\" (UID: \"6ad5bcca-c29e-4594-8698-4a139a80eb92\") " Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.621491 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.621572 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.621767 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.621843 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.622087 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-kube-api-access-ds2gq" (OuterVolumeSpecName: "kube-api-access-ds2gq") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "kube-api-access-ds2gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.674937 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.676612 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.676634 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.697549 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.697598 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.697570 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.697696 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.705792 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-inventory" (OuterVolumeSpecName: "inventory") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.708382 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ad5bcca-c29e-4594-8698-4a139a80eb92" (UID: "6ad5bcca-c29e-4594-8698-4a139a80eb92"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717727 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds2gq\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-kube-api-access-ds2gq\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717778 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717790 4799 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717811 4799 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717823 4799 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717835 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717848 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717860 4799 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717876 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717889 4799 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717900 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717914 4799 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717926 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6ad5bcca-c29e-4594-8698-4a139a80eb92-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.717938 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad5bcca-c29e-4594-8698-4a139a80eb92-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.876591 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" event={"ID":"6ad5bcca-c29e-4594-8698-4a139a80eb92","Type":"ContainerDied","Data":"0adcaa933e1cdf56e492944b49aab1138f568476e089a1ad18d38488585a6db0"} Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.876638 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0adcaa933e1cdf56e492944b49aab1138f568476e089a1ad18d38488585a6db0" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.876714 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fk67" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.879767 4799 generic.go:334] "Generic (PLEG): container finished" podID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerID="5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8" exitCode=0 Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.879811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwv94" event={"ID":"1cb7f6df-08e1-4ece-81f9-05194879d60e","Type":"ContainerDied","Data":"5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8"} Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.981010 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx"] Feb 16 13:06:16 crc kubenswrapper[4799]: E0216 13:06:16.983477 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad5bcca-c29e-4594-8698-4a139a80eb92" 
containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.983503 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad5bcca-c29e-4594-8698-4a139a80eb92" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.983703 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad5bcca-c29e-4594-8698-4a139a80eb92" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.984395 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.988736 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.988815 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.989084 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.989237 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:06:16 crc kubenswrapper[4799]: I0216 13:06:16.989358 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.021763 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx"] Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.126147 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.126328 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8s6x\" (UniqueName: \"kubernetes.io/projected/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-kube-api-access-l8s6x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.126395 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.126417 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.126446 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: 
\"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.227849 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8s6x\" (UniqueName: \"kubernetes.io/projected/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-kube-api-access-l8s6x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.227923 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.227944 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.227965 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.227998 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.229041 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.232117 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.232306 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.233223 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc 
kubenswrapper[4799]: I0216 13:06:17.262679 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8s6x\" (UniqueName: \"kubernetes.io/projected/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-kube-api-access-l8s6x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hpddx\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.307666 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.893524 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwv94" event={"ID":"1cb7f6df-08e1-4ece-81f9-05194879d60e","Type":"ContainerStarted","Data":"e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c"} Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.921785 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mwv94" podStartSLOduration=2.460376363 podStartE2EDuration="5.921753676s" podCreationTimestamp="2026-02-16 13:06:12 +0000 UTC" firstStartedPulling="2026-02-16 13:06:13.823727508 +0000 UTC m=+2079.416742842" lastFinishedPulling="2026-02-16 13:06:17.285104811 +0000 UTC m=+2082.878120155" observedRunningTime="2026-02-16 13:06:17.914255873 +0000 UTC m=+2083.507271207" watchObservedRunningTime="2026-02-16 13:06:17.921753676 +0000 UTC m=+2083.514769030" Feb 16 13:06:17 crc kubenswrapper[4799]: W0216 13:06:17.937429 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3f7c5d7_95f5_4b8b_9a17_99c4a179064e.slice/crio-05710f6edb31cd29f21b97a9f162862fb29c1167ff73c4c409b1bff00d312ca6 WatchSource:0}: Error finding container 05710f6edb31cd29f21b97a9f162862fb29c1167ff73c4c409b1bff00d312ca6: Status 404 returned 
error can't find the container with id 05710f6edb31cd29f21b97a9f162862fb29c1167ff73c4c409b1bff00d312ca6 Feb 16 13:06:17 crc kubenswrapper[4799]: I0216 13:06:17.938707 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx"] Feb 16 13:06:18 crc kubenswrapper[4799]: I0216 13:06:18.905091 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" event={"ID":"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e","Type":"ContainerStarted","Data":"d74352c40fd9af9a1db770b526e9fb846fe5c6a34eedcc0dd1836203e89ae11e"} Feb 16 13:06:18 crc kubenswrapper[4799]: I0216 13:06:18.905248 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" event={"ID":"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e","Type":"ContainerStarted","Data":"05710f6edb31cd29f21b97a9f162862fb29c1167ff73c4c409b1bff00d312ca6"} Feb 16 13:06:22 crc kubenswrapper[4799]: I0216 13:06:22.475380 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:22 crc kubenswrapper[4799]: I0216 13:06:22.476742 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:23 crc kubenswrapper[4799]: I0216 13:06:23.525802 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mwv94" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerName="registry-server" probeResult="failure" output=< Feb 16 13:06:23 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:06:23 crc kubenswrapper[4799]: > Feb 16 13:06:32 crc kubenswrapper[4799]: I0216 13:06:32.525366 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:32 crc kubenswrapper[4799]: I0216 13:06:32.562425 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" podStartSLOduration=16.075803845 podStartE2EDuration="16.562401411s" podCreationTimestamp="2026-02-16 13:06:16 +0000 UTC" firstStartedPulling="2026-02-16 13:06:17.940608953 +0000 UTC m=+2083.533624287" lastFinishedPulling="2026-02-16 13:06:18.427206529 +0000 UTC m=+2084.020221853" observedRunningTime="2026-02-16 13:06:18.936179682 +0000 UTC m=+2084.529195016" watchObservedRunningTime="2026-02-16 13:06:32.562401411 +0000 UTC m=+2098.155416745" Feb 16 13:06:32 crc kubenswrapper[4799]: I0216 13:06:32.581870 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:32 crc kubenswrapper[4799]: I0216 13:06:32.768006 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mwv94"] Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.052681 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mwv94" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerName="registry-server" containerID="cri-o://e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c" gracePeriod=2 Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.537134 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.631260 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-utilities\") pod \"1cb7f6df-08e1-4ece-81f9-05194879d60e\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.631364 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwp6k\" (UniqueName: \"kubernetes.io/projected/1cb7f6df-08e1-4ece-81f9-05194879d60e-kube-api-access-gwp6k\") pod \"1cb7f6df-08e1-4ece-81f9-05194879d60e\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.631513 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-catalog-content\") pod \"1cb7f6df-08e1-4ece-81f9-05194879d60e\" (UID: \"1cb7f6df-08e1-4ece-81f9-05194879d60e\") " Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.632272 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-utilities" (OuterVolumeSpecName: "utilities") pod "1cb7f6df-08e1-4ece-81f9-05194879d60e" (UID: "1cb7f6df-08e1-4ece-81f9-05194879d60e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.640714 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb7f6df-08e1-4ece-81f9-05194879d60e-kube-api-access-gwp6k" (OuterVolumeSpecName: "kube-api-access-gwp6k") pod "1cb7f6df-08e1-4ece-81f9-05194879d60e" (UID: "1cb7f6df-08e1-4ece-81f9-05194879d60e"). InnerVolumeSpecName "kube-api-access-gwp6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.734676 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.734722 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwp6k\" (UniqueName: \"kubernetes.io/projected/1cb7f6df-08e1-4ece-81f9-05194879d60e-kube-api-access-gwp6k\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.773005 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cb7f6df-08e1-4ece-81f9-05194879d60e" (UID: "1cb7f6df-08e1-4ece-81f9-05194879d60e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:06:34 crc kubenswrapper[4799]: I0216 13:06:34.836966 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb7f6df-08e1-4ece-81f9-05194879d60e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.064272 4799 generic.go:334] "Generic (PLEG): container finished" podID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerID="e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c" exitCode=0 Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.064338 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwv94" event={"ID":"1cb7f6df-08e1-4ece-81f9-05194879d60e","Type":"ContainerDied","Data":"e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c"} Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.064362 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwv94" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.064396 4799 scope.go:117] "RemoveContainer" containerID="e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.064382 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwv94" event={"ID":"1cb7f6df-08e1-4ece-81f9-05194879d60e","Type":"ContainerDied","Data":"4f83116d2e4ee7e3edb8360db4a869188443673f3ef0504f3ac8cd832274acb8"} Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.087064 4799 scope.go:117] "RemoveContainer" containerID="5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.111002 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mwv94"] Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.121985 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mwv94"] Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.134980 4799 scope.go:117] "RemoveContainer" containerID="96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.172995 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" path="/var/lib/kubelet/pods/1cb7f6df-08e1-4ece-81f9-05194879d60e/volumes" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.177948 4799 scope.go:117] "RemoveContainer" containerID="e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c" Feb 16 13:06:35 crc kubenswrapper[4799]: E0216 13:06:35.178902 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c\": container with ID starting with 
e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c not found: ID does not exist" containerID="e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.179040 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c"} err="failed to get container status \"e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c\": rpc error: code = NotFound desc = could not find container \"e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c\": container with ID starting with e11b03bfceb96aecfebb5b9bcb8aea2f5f718b96a83a1bbd084aa9a814fb792c not found: ID does not exist" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.179098 4799 scope.go:117] "RemoveContainer" containerID="5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8" Feb 16 13:06:35 crc kubenswrapper[4799]: E0216 13:06:35.179702 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8\": container with ID starting with 5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8 not found: ID does not exist" containerID="5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.179748 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8"} err="failed to get container status \"5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8\": rpc error: code = NotFound desc = could not find container \"5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8\": container with ID starting with 5d715868795732b31e92f8c3004a5d324b78796e053fcb4aa2e669df2db12be8 not found: ID does not 
exist" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.179789 4799 scope.go:117] "RemoveContainer" containerID="96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9" Feb 16 13:06:35 crc kubenswrapper[4799]: E0216 13:06:35.180455 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9\": container with ID starting with 96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9 not found: ID does not exist" containerID="96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9" Feb 16 13:06:35 crc kubenswrapper[4799]: I0216 13:06:35.180510 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9"} err="failed to get container status \"96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9\": rpc error: code = NotFound desc = could not find container \"96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9\": container with ID starting with 96f9091d16dc99892134eb197ff8440a50a527f06708ac662c79ca82efc3dbd9 not found: ID does not exist" Feb 16 13:07:00 crc kubenswrapper[4799]: I0216 13:07:00.927293 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7tlg6"] Feb 16 13:07:00 crc kubenswrapper[4799]: E0216 13:07:00.928089 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerName="extract-utilities" Feb 16 13:07:00 crc kubenswrapper[4799]: I0216 13:07:00.928102 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerName="extract-utilities" Feb 16 13:07:00 crc kubenswrapper[4799]: E0216 13:07:00.928150 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" 
containerName="registry-server" Feb 16 13:07:00 crc kubenswrapper[4799]: I0216 13:07:00.928158 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerName="registry-server" Feb 16 13:07:00 crc kubenswrapper[4799]: E0216 13:07:00.928174 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerName="extract-content" Feb 16 13:07:00 crc kubenswrapper[4799]: I0216 13:07:00.928182 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerName="extract-content" Feb 16 13:07:00 crc kubenswrapper[4799]: I0216 13:07:00.928425 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb7f6df-08e1-4ece-81f9-05194879d60e" containerName="registry-server" Feb 16 13:07:00 crc kubenswrapper[4799]: I0216 13:07:00.930324 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:00 crc kubenswrapper[4799]: I0216 13:07:00.941962 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7tlg6"] Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.022987 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pss\" (UniqueName: \"kubernetes.io/projected/18b635c9-4dce-450b-80b9-f3ad488217d9-kube-api-access-w9pss\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.023519 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-catalog-content\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " 
pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.023600 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-utilities\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.127077 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pss\" (UniqueName: \"kubernetes.io/projected/18b635c9-4dce-450b-80b9-f3ad488217d9-kube-api-access-w9pss\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.127589 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-catalog-content\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.127788 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-utilities\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.128326 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-catalog-content\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " 
pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.128546 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-utilities\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.151229 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pss\" (UniqueName: \"kubernetes.io/projected/18b635c9-4dce-450b-80b9-f3ad488217d9-kube-api-access-w9pss\") pod \"community-operators-7tlg6\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.259609 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:01 crc kubenswrapper[4799]: I0216 13:07:01.902989 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7tlg6"] Feb 16 13:07:02 crc kubenswrapper[4799]: I0216 13:07:02.366406 4799 generic.go:334] "Generic (PLEG): container finished" podID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerID="cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891" exitCode=0 Feb 16 13:07:02 crc kubenswrapper[4799]: I0216 13:07:02.366535 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tlg6" event={"ID":"18b635c9-4dce-450b-80b9-f3ad488217d9","Type":"ContainerDied","Data":"cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891"} Feb 16 13:07:02 crc kubenswrapper[4799]: I0216 13:07:02.366943 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tlg6" 
event={"ID":"18b635c9-4dce-450b-80b9-f3ad488217d9","Type":"ContainerStarted","Data":"f3b7cf6427b9771e521a0cdbb59d553c1b0af289d8b109c9f79f9391a20f67e8"} Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.734039 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4zgg5"] Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.736500 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.746297 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zgg5"] Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.810004 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-catalog-content\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.810098 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-utilities\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.810183 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9lkw\" (UniqueName: \"kubernetes.io/projected/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-kube-api-access-b9lkw\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.913180 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-catalog-content\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.913309 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-utilities\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.913377 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9lkw\" (UniqueName: \"kubernetes.io/projected/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-kube-api-access-b9lkw\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.913853 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-catalog-content\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.913951 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-utilities\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:03 crc kubenswrapper[4799]: I0216 13:07:03.938040 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b9lkw\" (UniqueName: \"kubernetes.io/projected/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-kube-api-access-b9lkw\") pod \"redhat-marketplace-4zgg5\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:04 crc kubenswrapper[4799]: I0216 13:07:04.062084 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:04 crc kubenswrapper[4799]: I0216 13:07:04.396497 4799 generic.go:334] "Generic (PLEG): container finished" podID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerID="127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520" exitCode=0 Feb 16 13:07:04 crc kubenswrapper[4799]: I0216 13:07:04.396954 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tlg6" event={"ID":"18b635c9-4dce-450b-80b9-f3ad488217d9","Type":"ContainerDied","Data":"127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520"} Feb 16 13:07:04 crc kubenswrapper[4799]: I0216 13:07:04.571226 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zgg5"] Feb 16 13:07:05 crc kubenswrapper[4799]: I0216 13:07:05.408474 4799 generic.go:334] "Generic (PLEG): container finished" podID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerID="4f6e7b0b0243425e522031e4d87590c6a571520af20d0f822551731d092705cf" exitCode=0 Feb 16 13:07:05 crc kubenswrapper[4799]: I0216 13:07:05.408553 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zgg5" event={"ID":"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f","Type":"ContainerDied","Data":"4f6e7b0b0243425e522031e4d87590c6a571520af20d0f822551731d092705cf"} Feb 16 13:07:05 crc kubenswrapper[4799]: I0216 13:07:05.408843 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zgg5" 
event={"ID":"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f","Type":"ContainerStarted","Data":"2d126eda9064656f38102395b3ad3b026ecf9a95c4b609cf04124e5a31a9d816"} Feb 16 13:07:05 crc kubenswrapper[4799]: I0216 13:07:05.414910 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tlg6" event={"ID":"18b635c9-4dce-450b-80b9-f3ad488217d9","Type":"ContainerStarted","Data":"c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881"} Feb 16 13:07:05 crc kubenswrapper[4799]: I0216 13:07:05.473428 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7tlg6" podStartSLOduration=3.04636936 podStartE2EDuration="5.47340081s" podCreationTimestamp="2026-02-16 13:07:00 +0000 UTC" firstStartedPulling="2026-02-16 13:07:02.36842779 +0000 UTC m=+2127.961443124" lastFinishedPulling="2026-02-16 13:07:04.79545923 +0000 UTC m=+2130.388474574" observedRunningTime="2026-02-16 13:07:05.460517434 +0000 UTC m=+2131.053532768" watchObservedRunningTime="2026-02-16 13:07:05.47340081 +0000 UTC m=+2131.066416154" Feb 16 13:07:06 crc kubenswrapper[4799]: I0216 13:07:06.440904 4799 generic.go:334] "Generic (PLEG): container finished" podID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerID="b6237344c11b70e6db804ab0378d563ae6f928c651dd7552e63ff6f2ebaa8635" exitCode=0 Feb 16 13:07:06 crc kubenswrapper[4799]: I0216 13:07:06.441208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zgg5" event={"ID":"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f","Type":"ContainerDied","Data":"b6237344c11b70e6db804ab0378d563ae6f928c651dd7552e63ff6f2ebaa8635"} Feb 16 13:07:07 crc kubenswrapper[4799]: I0216 13:07:07.456488 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zgg5" 
event={"ID":"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f","Type":"ContainerStarted","Data":"10a1b9cba6829f8cb34a2b7f1f3ccc47560aaefc508c621fb1d476088a2d99e0"} Feb 16 13:07:07 crc kubenswrapper[4799]: I0216 13:07:07.486694 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4zgg5" podStartSLOduration=2.9047323929999997 podStartE2EDuration="4.486672657s" podCreationTimestamp="2026-02-16 13:07:03 +0000 UTC" firstStartedPulling="2026-02-16 13:07:05.410955574 +0000 UTC m=+2131.003970908" lastFinishedPulling="2026-02-16 13:07:06.992895838 +0000 UTC m=+2132.585911172" observedRunningTime="2026-02-16 13:07:07.477456925 +0000 UTC m=+2133.070472259" watchObservedRunningTime="2026-02-16 13:07:07.486672657 +0000 UTC m=+2133.079687991" Feb 16 13:07:11 crc kubenswrapper[4799]: I0216 13:07:11.261232 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:11 crc kubenswrapper[4799]: I0216 13:07:11.262838 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:11 crc kubenswrapper[4799]: I0216 13:07:11.318589 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:11 crc kubenswrapper[4799]: I0216 13:07:11.563401 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:12 crc kubenswrapper[4799]: I0216 13:07:12.907966 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7tlg6"] Feb 16 13:07:13 crc kubenswrapper[4799]: I0216 13:07:13.521039 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7tlg6" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerName="registry-server" 
containerID="cri-o://c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881" gracePeriod=2 Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.030684 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.062575 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.062638 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.064884 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-utilities\") pod \"18b635c9-4dce-450b-80b9-f3ad488217d9\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.064986 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-catalog-content\") pod \"18b635c9-4dce-450b-80b9-f3ad488217d9\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.065219 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9pss\" (UniqueName: \"kubernetes.io/projected/18b635c9-4dce-450b-80b9-f3ad488217d9-kube-api-access-w9pss\") pod \"18b635c9-4dce-450b-80b9-f3ad488217d9\" (UID: \"18b635c9-4dce-450b-80b9-f3ad488217d9\") " Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.065944 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-utilities" (OuterVolumeSpecName: "utilities") pod 
"18b635c9-4dce-450b-80b9-f3ad488217d9" (UID: "18b635c9-4dce-450b-80b9-f3ad488217d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.078409 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b635c9-4dce-450b-80b9-f3ad488217d9-kube-api-access-w9pss" (OuterVolumeSpecName: "kube-api-access-w9pss") pod "18b635c9-4dce-450b-80b9-f3ad488217d9" (UID: "18b635c9-4dce-450b-80b9-f3ad488217d9"). InnerVolumeSpecName "kube-api-access-w9pss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.132769 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.154512 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18b635c9-4dce-450b-80b9-f3ad488217d9" (UID: "18b635c9-4dce-450b-80b9-f3ad488217d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.168258 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.168292 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18b635c9-4dce-450b-80b9-f3ad488217d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.168305 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9pss\" (UniqueName: \"kubernetes.io/projected/18b635c9-4dce-450b-80b9-f3ad488217d9-kube-api-access-w9pss\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.532247 4799 generic.go:334] "Generic (PLEG): container finished" podID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerID="c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881" exitCode=0 Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.532334 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7tlg6" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.532341 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tlg6" event={"ID":"18b635c9-4dce-450b-80b9-f3ad488217d9","Type":"ContainerDied","Data":"c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881"} Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.532715 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tlg6" event={"ID":"18b635c9-4dce-450b-80b9-f3ad488217d9","Type":"ContainerDied","Data":"f3b7cf6427b9771e521a0cdbb59d553c1b0af289d8b109c9f79f9391a20f67e8"} Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.532756 4799 scope.go:117] "RemoveContainer" containerID="c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.561288 4799 scope.go:117] "RemoveContainer" containerID="127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.571874 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7tlg6"] Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.581863 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7tlg6"] Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.594369 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.608580 4799 scope.go:117] "RemoveContainer" containerID="cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.661386 4799 scope.go:117] "RemoveContainer" containerID="c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881" Feb 16 13:07:14 crc 
kubenswrapper[4799]: E0216 13:07:14.674309 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881\": container with ID starting with c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881 not found: ID does not exist" containerID="c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.674368 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881"} err="failed to get container status \"c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881\": rpc error: code = NotFound desc = could not find container \"c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881\": container with ID starting with c3a809a41aeaff567ba3563d362d8ed107a53f14b8d02cd0e803727300e1a881 not found: ID does not exist" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.674405 4799 scope.go:117] "RemoveContainer" containerID="127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520" Feb 16 13:07:14 crc kubenswrapper[4799]: E0216 13:07:14.675154 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520\": container with ID starting with 127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520 not found: ID does not exist" containerID="127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.675209 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520"} err="failed to get container status 
\"127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520\": rpc error: code = NotFound desc = could not find container \"127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520\": container with ID starting with 127be9bfe25e2614e043a8d6ba83360b784043a64b6e271f0add799ab2b34520 not found: ID does not exist" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.675249 4799 scope.go:117] "RemoveContainer" containerID="cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891" Feb 16 13:07:14 crc kubenswrapper[4799]: E0216 13:07:14.675727 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891\": container with ID starting with cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891 not found: ID does not exist" containerID="cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891" Feb 16 13:07:14 crc kubenswrapper[4799]: I0216 13:07:14.675760 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891"} err="failed to get container status \"cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891\": rpc error: code = NotFound desc = could not find container \"cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891\": container with ID starting with cf3c9a620c39d93003356328c1a2d47214d146bd703d6d685ed3b80e38e7c891 not found: ID does not exist" Feb 16 13:07:15 crc kubenswrapper[4799]: I0216 13:07:15.160617 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" path="/var/lib/kubelet/pods/18b635c9-4dce-450b-80b9-f3ad488217d9/volumes" Feb 16 13:07:16 crc kubenswrapper[4799]: I0216 13:07:16.510479 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zgg5"] Feb 16 
13:07:16 crc kubenswrapper[4799]: I0216 13:07:16.552530 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4zgg5" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerName="registry-server" containerID="cri-o://10a1b9cba6829f8cb34a2b7f1f3ccc47560aaefc508c621fb1d476088a2d99e0" gracePeriod=2 Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.565999 4799 generic.go:334] "Generic (PLEG): container finished" podID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerID="10a1b9cba6829f8cb34a2b7f1f3ccc47560aaefc508c621fb1d476088a2d99e0" exitCode=0 Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.566059 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zgg5" event={"ID":"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f","Type":"ContainerDied","Data":"10a1b9cba6829f8cb34a2b7f1f3ccc47560aaefc508c621fb1d476088a2d99e0"} Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.566446 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zgg5" event={"ID":"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f","Type":"ContainerDied","Data":"2d126eda9064656f38102395b3ad3b026ecf9a95c4b609cf04124e5a31a9d816"} Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.566466 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d126eda9064656f38102395b3ad3b026ecf9a95c4b609cf04124e5a31a9d816" Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.594118 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.653657 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9lkw\" (UniqueName: \"kubernetes.io/projected/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-kube-api-access-b9lkw\") pod \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.653805 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-utilities\") pod \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.653836 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-catalog-content\") pod \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\" (UID: \"c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f\") " Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.655266 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-utilities" (OuterVolumeSpecName: "utilities") pod "c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" (UID: "c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.661214 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-kube-api-access-b9lkw" (OuterVolumeSpecName: "kube-api-access-b9lkw") pod "c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" (UID: "c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f"). InnerVolumeSpecName "kube-api-access-b9lkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.690999 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" (UID: "c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.757030 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9lkw\" (UniqueName: \"kubernetes.io/projected/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-kube-api-access-b9lkw\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.757064 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:17 crc kubenswrapper[4799]: I0216 13:07:17.757076 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:18 crc kubenswrapper[4799]: I0216 13:07:18.576832 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zgg5" Feb 16 13:07:18 crc kubenswrapper[4799]: I0216 13:07:18.631984 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zgg5"] Feb 16 13:07:18 crc kubenswrapper[4799]: I0216 13:07:18.645690 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zgg5"] Feb 16 13:07:19 crc kubenswrapper[4799]: I0216 13:07:19.161729 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" path="/var/lib/kubelet/pods/c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f/volumes" Feb 16 13:07:21 crc kubenswrapper[4799]: I0216 13:07:21.606057 4799 generic.go:334] "Generic (PLEG): container finished" podID="e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" containerID="d74352c40fd9af9a1db770b526e9fb846fe5c6a34eedcc0dd1836203e89ae11e" exitCode=0 Feb 16 13:07:21 crc kubenswrapper[4799]: I0216 13:07:21.606116 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" event={"ID":"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e","Type":"ContainerDied","Data":"d74352c40fd9af9a1db770b526e9fb846fe5c6a34eedcc0dd1836203e89ae11e"} Feb 16 13:07:21 crc kubenswrapper[4799]: I0216 13:07:21.793234 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:07:21 crc kubenswrapper[4799]: I0216 13:07:21.793309 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.126027 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.174384 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ssh-key-openstack-edpm-ipam\") pod \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.174474 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovncontroller-config-0\") pod \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.174582 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8s6x\" (UniqueName: \"kubernetes.io/projected/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-kube-api-access-l8s6x\") pod \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.174657 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovn-combined-ca-bundle\") pod \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.174783 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory\") pod \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\" (UID: 
\"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.186148 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" (UID: "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.197542 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-kube-api-access-l8s6x" (OuterVolumeSpecName: "kube-api-access-l8s6x") pod "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" (UID: "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e"). InnerVolumeSpecName "kube-api-access-l8s6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.210800 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" (UID: "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:07:23 crc kubenswrapper[4799]: E0216 13:07:23.214688 4799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory podName:e3f7c5d7-95f5-4b8b-9a17-99c4a179064e nodeName:}" failed. No retries permitted until 2026-02-16 13:07:23.714657973 +0000 UTC m=+2149.307673297 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory") pod "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" (UID: "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e") : error deleting /var/lib/kubelet/pods/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e/volume-subpaths: remove /var/lib/kubelet/pods/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e/volume-subpaths: no such file or directory Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.218778 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" (UID: "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.277883 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.277954 4799 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.277968 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8s6x\" (UniqueName: \"kubernetes.io/projected/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-kube-api-access-l8s6x\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.277980 4799 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.629311 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" event={"ID":"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e","Type":"ContainerDied","Data":"05710f6edb31cd29f21b97a9f162862fb29c1167ff73c4c409b1bff00d312ca6"} Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.629359 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05710f6edb31cd29f21b97a9f162862fb29c1167ff73c4c409b1bff00d312ca6" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.629406 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hpddx" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.789809 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory\") pod \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\" (UID: \"e3f7c5d7-95f5-4b8b-9a17-99c4a179064e\") " Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.796297 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory" (OuterVolumeSpecName: "inventory") pod "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" (UID: "e3f7c5d7-95f5-4b8b-9a17-99c4a179064e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.797391 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh"] Feb 16 13:07:23 crc kubenswrapper[4799]: E0216 13:07:23.797834 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerName="registry-server" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.797854 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerName="registry-server" Feb 16 13:07:23 crc kubenswrapper[4799]: E0216 13:07:23.797874 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerName="extract-content" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.797881 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerName="extract-content" Feb 16 13:07:23 crc kubenswrapper[4799]: E0216 13:07:23.797894 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerName="extract-content" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.797901 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerName="extract-content" Feb 16 13:07:23 crc kubenswrapper[4799]: E0216 13:07:23.797915 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.797921 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:07:23 crc kubenswrapper[4799]: E0216 13:07:23.797935 4799 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerName="extract-utilities" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.797943 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerName="extract-utilities" Feb 16 13:07:23 crc kubenswrapper[4799]: E0216 13:07:23.797959 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerName="extract-utilities" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.797966 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerName="extract-utilities" Feb 16 13:07:23 crc kubenswrapper[4799]: E0216 13:07:23.797979 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerName="registry-server" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.797985 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerName="registry-server" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.798178 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f7c5d7-95f5-4b8b-9a17-99c4a179064e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.798196 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e9a341-5cd5-47b4-bc82-e4d2b83cc49f" containerName="registry-server" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.798206 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b635c9-4dce-450b-80b9-f3ad488217d9" containerName="registry-server" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.799032 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.801761 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.803810 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.810860 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh"] Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.892436 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.892918 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.892988 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.893045 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.893092 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.893289 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xsq\" (UniqueName: \"kubernetes.io/projected/fbfe848b-c120-4ca7-993f-47c1e3902ed1-kube-api-access-p8xsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.893416 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f7c5d7-95f5-4b8b-9a17-99c4a179064e-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.995648 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.995730 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.995781 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.995817 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.995862 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:23 crc kubenswrapper[4799]: I0216 13:07:23.996072 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8xsq\" (UniqueName: \"kubernetes.io/projected/fbfe848b-c120-4ca7-993f-47c1e3902ed1-kube-api-access-p8xsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:24 crc kubenswrapper[4799]: I0216 13:07:24.000483 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:24 crc kubenswrapper[4799]: I0216 13:07:24.000963 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:24 crc kubenswrapper[4799]: I0216 13:07:24.001496 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:24 crc kubenswrapper[4799]: I0216 13:07:24.001593 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:24 crc kubenswrapper[4799]: I0216 13:07:24.001902 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:24 crc kubenswrapper[4799]: I0216 13:07:24.016294 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8xsq\" (UniqueName: \"kubernetes.io/projected/fbfe848b-c120-4ca7-993f-47c1e3902ed1-kube-api-access-p8xsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:24 crc kubenswrapper[4799]: I0216 13:07:24.170813 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:07:24 crc kubenswrapper[4799]: I0216 13:07:24.805115 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh"] Feb 16 13:07:25 crc kubenswrapper[4799]: I0216 13:07:25.684953 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" event={"ID":"fbfe848b-c120-4ca7-993f-47c1e3902ed1","Type":"ContainerStarted","Data":"e1ab3f85945fb541fda504cbd084a9c43a4d7289b68935eb147cf71af028e621"} Feb 16 13:07:25 crc kubenswrapper[4799]: I0216 13:07:25.685532 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" event={"ID":"fbfe848b-c120-4ca7-993f-47c1e3902ed1","Type":"ContainerStarted","Data":"61cfd77da81893d9847dec197c138f5cdd6931060dfcad48fe9c4d809903526e"} Feb 16 13:07:25 crc kubenswrapper[4799]: I0216 13:07:25.709675 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" podStartSLOduration=2.236646739 podStartE2EDuration="2.709657307s" podCreationTimestamp="2026-02-16 13:07:23 +0000 UTC" firstStartedPulling="2026-02-16 13:07:24.819753766 +0000 UTC m=+2150.412769100" lastFinishedPulling="2026-02-16 13:07:25.292764334 +0000 UTC m=+2150.885779668" observedRunningTime="2026-02-16 13:07:25.705645863 +0000 UTC m=+2151.298661197" watchObservedRunningTime="2026-02-16 13:07:25.709657307 +0000 UTC m=+2151.302672641" Feb 16 13:07:51 crc kubenswrapper[4799]: I0216 13:07:51.793362 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
16 13:07:51 crc kubenswrapper[4799]: I0216 13:07:51.793992 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:08:14 crc kubenswrapper[4799]: I0216 13:08:14.160996 4799 generic.go:334] "Generic (PLEG): container finished" podID="fbfe848b-c120-4ca7-993f-47c1e3902ed1" containerID="e1ab3f85945fb541fda504cbd084a9c43a4d7289b68935eb147cf71af028e621" exitCode=0 Feb 16 13:08:14 crc kubenswrapper[4799]: I0216 13:08:14.161086 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" event={"ID":"fbfe848b-c120-4ca7-993f-47c1e3902ed1","Type":"ContainerDied","Data":"e1ab3f85945fb541fda504cbd084a9c43a4d7289b68935eb147cf71af028e621"} Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.604853 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.766495 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.766655 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-inventory\") pod \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.766766 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-metadata-combined-ca-bundle\") pod \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.766967 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8xsq\" (UniqueName: \"kubernetes.io/projected/fbfe848b-c120-4ca7-993f-47c1e3902ed1-kube-api-access-p8xsq\") pod \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.767060 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-ssh-key-openstack-edpm-ipam\") pod \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " Feb 16 
13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.767196 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-nova-metadata-neutron-config-0\") pod \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\" (UID: \"fbfe848b-c120-4ca7-993f-47c1e3902ed1\") " Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.774651 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfe848b-c120-4ca7-993f-47c1e3902ed1-kube-api-access-p8xsq" (OuterVolumeSpecName: "kube-api-access-p8xsq") pod "fbfe848b-c120-4ca7-993f-47c1e3902ed1" (UID: "fbfe848b-c120-4ca7-993f-47c1e3902ed1"). InnerVolumeSpecName "kube-api-access-p8xsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.778533 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fbfe848b-c120-4ca7-993f-47c1e3902ed1" (UID: "fbfe848b-c120-4ca7-993f-47c1e3902ed1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.798527 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-inventory" (OuterVolumeSpecName: "inventory") pod "fbfe848b-c120-4ca7-993f-47c1e3902ed1" (UID: "fbfe848b-c120-4ca7-993f-47c1e3902ed1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.798579 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbfe848b-c120-4ca7-993f-47c1e3902ed1" (UID: "fbfe848b-c120-4ca7-993f-47c1e3902ed1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.809035 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "fbfe848b-c120-4ca7-993f-47c1e3902ed1" (UID: "fbfe848b-c120-4ca7-993f-47c1e3902ed1"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.812398 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "fbfe848b-c120-4ca7-993f-47c1e3902ed1" (UID: "fbfe848b-c120-4ca7-993f-47c1e3902ed1"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.870066 4799 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.870119 4799 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.870157 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.870171 4799 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.870373 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8xsq\" (UniqueName: \"kubernetes.io/projected/fbfe848b-c120-4ca7-993f-47c1e3902ed1-kube-api-access-p8xsq\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:15 crc kubenswrapper[4799]: I0216 13:08:15.870385 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfe848b-c120-4ca7-993f-47c1e3902ed1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.182627 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" event={"ID":"fbfe848b-c120-4ca7-993f-47c1e3902ed1","Type":"ContainerDied","Data":"61cfd77da81893d9847dec197c138f5cdd6931060dfcad48fe9c4d809903526e"} Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.182938 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61cfd77da81893d9847dec197c138f5cdd6931060dfcad48fe9c4d809903526e" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.182681 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.288111 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z"] Feb 16 13:08:16 crc kubenswrapper[4799]: E0216 13:08:16.288603 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfe848b-c120-4ca7-993f-47c1e3902ed1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.288627 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfe848b-c120-4ca7-993f-47c1e3902ed1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.288844 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfe848b-c120-4ca7-993f-47c1e3902ed1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.291331 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.293965 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.294216 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.294047 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.295466 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.295592 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.305269 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z"] Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.381900 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.381973 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.382099 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.382290 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.382458 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lz5\" (UniqueName: \"kubernetes.io/projected/c895c98f-f5b4-4f98-b498-fe07218cad2f-kube-api-access-22lz5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.484167 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.484304 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.484336 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.484360 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lz5\" (UniqueName: \"kubernetes.io/projected/c895c98f-f5b4-4f98-b498-fe07218cad2f-kube-api-access-22lz5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.484927 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.490042 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: 
\"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.493829 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.495718 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.495993 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.507000 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lz5\" (UniqueName: \"kubernetes.io/projected/c895c98f-f5b4-4f98-b498-fe07218cad2f-kube-api-access-22lz5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:16 crc kubenswrapper[4799]: I0216 13:08:16.619399 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:08:17 crc kubenswrapper[4799]: I0216 13:08:17.332331 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z"] Feb 16 13:08:18 crc kubenswrapper[4799]: I0216 13:08:18.221581 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" event={"ID":"c895c98f-f5b4-4f98-b498-fe07218cad2f","Type":"ContainerStarted","Data":"5b145bf6f4d18f40d91e51c6f1fffb3e2f1a48b115f42c7d765c1203965e3b19"} Feb 16 13:08:18 crc kubenswrapper[4799]: I0216 13:08:18.221902 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" event={"ID":"c895c98f-f5b4-4f98-b498-fe07218cad2f","Type":"ContainerStarted","Data":"53ce9e2bcb295905fc97655007e24873d9144e9da1d73a0c9463c12f9f65050e"} Feb 16 13:08:18 crc kubenswrapper[4799]: I0216 13:08:18.247373 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" podStartSLOduration=1.826055413 podStartE2EDuration="2.24735251s" podCreationTimestamp="2026-02-16 13:08:16 +0000 UTC" firstStartedPulling="2026-02-16 13:08:17.318146868 +0000 UTC m=+2202.911162202" lastFinishedPulling="2026-02-16 13:08:17.739443965 +0000 UTC m=+2203.332459299" observedRunningTime="2026-02-16 13:08:18.24453105 +0000 UTC m=+2203.837546394" watchObservedRunningTime="2026-02-16 13:08:18.24735251 +0000 UTC m=+2203.840367844" Feb 16 13:08:21 crc kubenswrapper[4799]: I0216 13:08:21.792770 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:08:21 crc kubenswrapper[4799]: I0216 
13:08:21.793346 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:08:21 crc kubenswrapper[4799]: I0216 13:08:21.793391 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:08:21 crc kubenswrapper[4799]: I0216 13:08:21.794172 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1faa0f6dc2243e4711410dc1041f8d75eb757e3e7a9756791421eafb48ea14d3"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:08:21 crc kubenswrapper[4799]: I0216 13:08:21.794222 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://1faa0f6dc2243e4711410dc1041f8d75eb757e3e7a9756791421eafb48ea14d3" gracePeriod=600 Feb 16 13:08:22 crc kubenswrapper[4799]: I0216 13:08:22.263584 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="1faa0f6dc2243e4711410dc1041f8d75eb757e3e7a9756791421eafb48ea14d3" exitCode=0 Feb 16 13:08:22 crc kubenswrapper[4799]: I0216 13:08:22.263664 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"1faa0f6dc2243e4711410dc1041f8d75eb757e3e7a9756791421eafb48ea14d3"} Feb 16 13:08:22 crc 
kubenswrapper[4799]: I0216 13:08:22.263889 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542"} Feb 16 13:08:22 crc kubenswrapper[4799]: I0216 13:08:22.263913 4799 scope.go:117] "RemoveContainer" containerID="44ebf0ac40d2a0bae856329c9695f65b49712f1e3095955263f60d845ce5bf15" Feb 16 13:10:51 crc kubenswrapper[4799]: I0216 13:10:51.792783 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:10:51 crc kubenswrapper[4799]: I0216 13:10:51.793446 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:11:21 crc kubenswrapper[4799]: I0216 13:11:21.793381 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:11:21 crc kubenswrapper[4799]: I0216 13:11:21.793840 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 
13:11:51 crc kubenswrapper[4799]: I0216 13:11:51.793387 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:11:51 crc kubenswrapper[4799]: I0216 13:11:51.793896 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:11:51 crc kubenswrapper[4799]: I0216 13:11:51.793968 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:11:51 crc kubenswrapper[4799]: I0216 13:11:51.794726 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:11:51 crc kubenswrapper[4799]: I0216 13:11:51.794776 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" gracePeriod=600 Feb 16 13:11:51 crc kubenswrapper[4799]: E0216 13:11:51.922164 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:11:52 crc kubenswrapper[4799]: I0216 13:11:52.605462 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" exitCode=0 Feb 16 13:11:52 crc kubenswrapper[4799]: I0216 13:11:52.605524 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542"} Feb 16 13:11:52 crc kubenswrapper[4799]: I0216 13:11:52.605562 4799 scope.go:117] "RemoveContainer" containerID="1faa0f6dc2243e4711410dc1041f8d75eb757e3e7a9756791421eafb48ea14d3" Feb 16 13:11:52 crc kubenswrapper[4799]: I0216 13:11:52.607719 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:11:52 crc kubenswrapper[4799]: E0216 13:11:52.608188 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:12:04 crc kubenswrapper[4799]: I0216 13:12:04.149964 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:12:04 crc kubenswrapper[4799]: E0216 13:12:04.151309 4799 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:12:07 crc kubenswrapper[4799]: I0216 13:12:07.751758 4799 generic.go:334] "Generic (PLEG): container finished" podID="c895c98f-f5b4-4f98-b498-fe07218cad2f" containerID="5b145bf6f4d18f40d91e51c6f1fffb3e2f1a48b115f42c7d765c1203965e3b19" exitCode=0 Feb 16 13:12:07 crc kubenswrapper[4799]: I0216 13:12:07.751854 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" event={"ID":"c895c98f-f5b4-4f98-b498-fe07218cad2f","Type":"ContainerDied","Data":"5b145bf6f4d18f40d91e51c6f1fffb3e2f1a48b115f42c7d765c1203965e3b19"} Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.194215 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.374287 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22lz5\" (UniqueName: \"kubernetes.io/projected/c895c98f-f5b4-4f98-b498-fe07218cad2f-kube-api-access-22lz5\") pod \"c895c98f-f5b4-4f98-b498-fe07218cad2f\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.374350 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-inventory\") pod \"c895c98f-f5b4-4f98-b498-fe07218cad2f\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.374402 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-ssh-key-openstack-edpm-ipam\") pod \"c895c98f-f5b4-4f98-b498-fe07218cad2f\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.374508 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-combined-ca-bundle\") pod \"c895c98f-f5b4-4f98-b498-fe07218cad2f\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.374572 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-secret-0\") pod \"c895c98f-f5b4-4f98-b498-fe07218cad2f\" (UID: \"c895c98f-f5b4-4f98-b498-fe07218cad2f\") " Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.382342 4799 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c895c98f-f5b4-4f98-b498-fe07218cad2f-kube-api-access-22lz5" (OuterVolumeSpecName: "kube-api-access-22lz5") pod "c895c98f-f5b4-4f98-b498-fe07218cad2f" (UID: "c895c98f-f5b4-4f98-b498-fe07218cad2f"). InnerVolumeSpecName "kube-api-access-22lz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.382448 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c895c98f-f5b4-4f98-b498-fe07218cad2f" (UID: "c895c98f-f5b4-4f98-b498-fe07218cad2f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.405375 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c895c98f-f5b4-4f98-b498-fe07218cad2f" (UID: "c895c98f-f5b4-4f98-b498-fe07218cad2f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.407167 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-inventory" (OuterVolumeSpecName: "inventory") pod "c895c98f-f5b4-4f98-b498-fe07218cad2f" (UID: "c895c98f-f5b4-4f98-b498-fe07218cad2f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.413609 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c895c98f-f5b4-4f98-b498-fe07218cad2f" (UID: "c895c98f-f5b4-4f98-b498-fe07218cad2f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.477024 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.477056 4799 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.477085 4799 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.477095 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22lz5\" (UniqueName: \"kubernetes.io/projected/c895c98f-f5b4-4f98-b498-fe07218cad2f-kube-api-access-22lz5\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.477105 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c895c98f-f5b4-4f98-b498-fe07218cad2f-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.769347 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" event={"ID":"c895c98f-f5b4-4f98-b498-fe07218cad2f","Type":"ContainerDied","Data":"53ce9e2bcb295905fc97655007e24873d9144e9da1d73a0c9463c12f9f65050e"} Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.769593 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ce9e2bcb295905fc97655007e24873d9144e9da1d73a0c9463c12f9f65050e" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.769395 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.872824 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d"] Feb 16 13:12:09 crc kubenswrapper[4799]: E0216 13:12:09.873270 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c895c98f-f5b4-4f98-b498-fe07218cad2f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.873285 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c895c98f-f5b4-4f98-b498-fe07218cad2f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.873483 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c895c98f-f5b4-4f98-b498-fe07218cad2f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.874186 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.876233 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.876257 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.877070 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.877348 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.877433 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.877493 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.883881 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.885012 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d"] Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.986631 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: 
I0216 13:12:09.986679 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.986722 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.986748 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.987438 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.987492 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.987597 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnj9\" (UniqueName: \"kubernetes.io/projected/9ecaed67-149c-4202-b3c9-c186d68a4b9a-kube-api-access-8rnj9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.987690 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:09 crc kubenswrapper[4799]: I0216 13:12:09.987801 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.089667 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: 
\"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.089741 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.089760 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.089794 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.089821 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.089883 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.089913 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.089940 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnj9\" (UniqueName: \"kubernetes.io/projected/9ecaed67-149c-4202-b3c9-c186d68a4b9a-kube-api-access-8rnj9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.090046 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.090840 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.094200 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.094499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.094711 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.095040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.095740 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.095816 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.097100 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.113919 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnj9\" (UniqueName: \"kubernetes.io/projected/9ecaed67-149c-4202-b3c9-c186d68a4b9a-kube-api-access-8rnj9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-zr78d\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.208056 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.736622 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d"] Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.739745 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:12:10 crc kubenswrapper[4799]: I0216 13:12:10.778656 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" event={"ID":"9ecaed67-149c-4202-b3c9-c186d68a4b9a","Type":"ContainerStarted","Data":"ab3ceae03c498a329ac5fe968353ced32901bd16d1144be37b0723331843cf19"} Feb 16 13:12:11 crc kubenswrapper[4799]: I0216 13:12:11.788399 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" event={"ID":"9ecaed67-149c-4202-b3c9-c186d68a4b9a","Type":"ContainerStarted","Data":"79802cb2b22ab3a46e8f2e7fb2043ccb96a403425feee6f906783602c5799e4a"} Feb 16 13:12:11 crc kubenswrapper[4799]: I0216 13:12:11.810761 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" podStartSLOduration=2.18518638 podStartE2EDuration="2.810741184s" podCreationTimestamp="2026-02-16 13:12:09 +0000 UTC" firstStartedPulling="2026-02-16 13:12:10.739392109 +0000 UTC m=+2436.332407453" lastFinishedPulling="2026-02-16 13:12:11.364946923 +0000 UTC m=+2436.957962257" observedRunningTime="2026-02-16 13:12:11.808118809 +0000 UTC m=+2437.401134143" watchObservedRunningTime="2026-02-16 13:12:11.810741184 +0000 UTC m=+2437.403756518" Feb 16 13:12:15 crc kubenswrapper[4799]: I0216 13:12:15.150613 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:12:15 crc kubenswrapper[4799]: E0216 13:12:15.151459 4799 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:12:30 crc kubenswrapper[4799]: I0216 13:12:30.150533 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:12:30 crc kubenswrapper[4799]: E0216 13:12:30.152007 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:12:42 crc kubenswrapper[4799]: I0216 13:12:42.148959 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:12:42 crc kubenswrapper[4799]: E0216 13:12:42.149871 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:12:57 crc kubenswrapper[4799]: I0216 13:12:57.148921 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:12:57 crc kubenswrapper[4799]: E0216 
13:12:57.149767 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:13:09 crc kubenswrapper[4799]: I0216 13:13:09.152796 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:13:09 crc kubenswrapper[4799]: E0216 13:13:09.153440 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:13:20 crc kubenswrapper[4799]: I0216 13:13:20.149471 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:13:20 crc kubenswrapper[4799]: E0216 13:13:20.150311 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:13:32 crc kubenswrapper[4799]: I0216 13:13:32.150218 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:13:32 crc 
kubenswrapper[4799]: E0216 13:13:32.150959 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:13:43 crc kubenswrapper[4799]: I0216 13:13:43.149287 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:13:43 crc kubenswrapper[4799]: E0216 13:13:43.150147 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:13:43 crc kubenswrapper[4799]: I0216 13:13:43.481325 4799 scope.go:117] "RemoveContainer" containerID="b6237344c11b70e6db804ab0378d563ae6f928c651dd7552e63ff6f2ebaa8635" Feb 16 13:13:43 crc kubenswrapper[4799]: I0216 13:13:43.513640 4799 scope.go:117] "RemoveContainer" containerID="4f6e7b0b0243425e522031e4d87590c6a571520af20d0f822551731d092705cf" Feb 16 13:13:43 crc kubenswrapper[4799]: I0216 13:13:43.579328 4799 scope.go:117] "RemoveContainer" containerID="10a1b9cba6829f8cb34a2b7f1f3ccc47560aaefc508c621fb1d476088a2d99e0" Feb 16 13:13:54 crc kubenswrapper[4799]: I0216 13:13:54.149439 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:13:54 crc kubenswrapper[4799]: E0216 13:13:54.150196 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:14:08 crc kubenswrapper[4799]: I0216 13:14:08.150047 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:14:08 crc kubenswrapper[4799]: E0216 13:14:08.150976 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:14:20 crc kubenswrapper[4799]: I0216 13:14:20.150179 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:14:20 crc kubenswrapper[4799]: E0216 13:14:20.150855 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:14:28 crc kubenswrapper[4799]: I0216 13:14:28.136998 4799 generic.go:334] "Generic (PLEG): container finished" podID="9ecaed67-149c-4202-b3c9-c186d68a4b9a" containerID="79802cb2b22ab3a46e8f2e7fb2043ccb96a403425feee6f906783602c5799e4a" exitCode=0 Feb 16 13:14:28 crc kubenswrapper[4799]: 
I0216 13:14:28.137086 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" event={"ID":"9ecaed67-149c-4202-b3c9-c186d68a4b9a","Type":"ContainerDied","Data":"79802cb2b22ab3a46e8f2e7fb2043ccb96a403425feee6f906783602c5799e4a"} Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.556638 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.593652 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-inventory\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.593708 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-1\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.593756 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-ssh-key-openstack-edpm-ipam\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.593785 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-1\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc 
kubenswrapper[4799]: I0216 13:14:29.593821 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-0\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.593839 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rnj9\" (UniqueName: \"kubernetes.io/projected/9ecaed67-149c-4202-b3c9-c186d68a4b9a-kube-api-access-8rnj9\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.593888 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-extra-config-0\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.593942 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-combined-ca-bundle\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.594043 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-0\") pod \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\" (UID: \"9ecaed67-149c-4202-b3c9-c186d68a4b9a\") " Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.606541 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9ecaed67-149c-4202-b3c9-c186d68a4b9a-kube-api-access-8rnj9" (OuterVolumeSpecName: "kube-api-access-8rnj9") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "kube-api-access-8rnj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.610271 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.630963 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.630987 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.631344 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-inventory" (OuterVolumeSpecName: "inventory") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.633509 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.633718 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.636997 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.642354 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9ecaed67-149c-4202-b3c9-c186d68a4b9a" (UID: "9ecaed67-149c-4202-b3c9-c186d68a4b9a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696027 4799 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696069 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696079 4799 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696092 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696102 4799 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696110 4799 
reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696130 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rnj9\" (UniqueName: \"kubernetes.io/projected/9ecaed67-149c-4202-b3c9-c186d68a4b9a-kube-api-access-8rnj9\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696140 4799 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:29 crc kubenswrapper[4799]: I0216 13:14:29.696148 4799 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecaed67-149c-4202-b3c9-c186d68a4b9a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.156745 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" event={"ID":"9ecaed67-149c-4202-b3c9-c186d68a4b9a","Type":"ContainerDied","Data":"ab3ceae03c498a329ac5fe968353ced32901bd16d1144be37b0723331843cf19"} Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.156775 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3ceae03c498a329ac5fe968353ced32901bd16d1144be37b0723331843cf19" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.156836 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-zr78d" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.264733 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9"] Feb 16 13:14:30 crc kubenswrapper[4799]: E0216 13:14:30.265274 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecaed67-149c-4202-b3c9-c186d68a4b9a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.265295 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecaed67-149c-4202-b3c9-c186d68a4b9a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.265509 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecaed67-149c-4202-b3c9-c186d68a4b9a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.266226 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.270646 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.270961 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x4vbs" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.270969 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.271336 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.271566 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.306505 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z65pg\" (UniqueName: \"kubernetes.io/projected/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-kube-api-access-z65pg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.306805 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 
13:14:30.306991 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.307263 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.307396 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.307429 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.307551 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9"] Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.307637 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.409669 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.410160 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z65pg\" (UniqueName: \"kubernetes.io/projected/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-kube-api-access-z65pg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.410223 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.410337 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.410445 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.410540 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.410589 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.415515 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.415515 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.417095 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.418077 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.418326 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.418830 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.432331 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z65pg\" (UniqueName: \"kubernetes.io/projected/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-kube-api-access-z65pg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:30 crc kubenswrapper[4799]: I0216 13:14:30.622454 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:14:31 crc kubenswrapper[4799]: I0216 13:14:31.192629 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9"] Feb 16 13:14:32 crc kubenswrapper[4799]: I0216 13:14:32.180552 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" event={"ID":"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7","Type":"ContainerStarted","Data":"1186a644b512ecb435c33d8945f503c3000ed5194a390cd29779140e2a2278b1"} Feb 16 13:14:32 crc kubenswrapper[4799]: I0216 13:14:32.180872 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" event={"ID":"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7","Type":"ContainerStarted","Data":"2682cb88c260620d558550e6749792538e798bfd804eecc73503378dc832b8fa"} Feb 16 13:14:32 crc kubenswrapper[4799]: I0216 13:14:32.205362 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" podStartSLOduration=1.6645278220000002 podStartE2EDuration="2.205342013s" podCreationTimestamp="2026-02-16 13:14:30 +0000 UTC" firstStartedPulling="2026-02-16 13:14:31.195985793 +0000 UTC m=+2576.789001127" lastFinishedPulling="2026-02-16 13:14:31.736799984 +0000 UTC m=+2577.329815318" observedRunningTime="2026-02-16 13:14:32.201665918 +0000 UTC m=+2577.794681252" watchObservedRunningTime="2026-02-16 13:14:32.205342013 +0000 UTC m=+2577.798357347" Feb 16 13:14:35 crc kubenswrapper[4799]: I0216 13:14:35.157833 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:14:35 crc kubenswrapper[4799]: E0216 13:14:35.159099 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:14:46 crc kubenswrapper[4799]: I0216 13:14:46.149274 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:14:46 crc kubenswrapper[4799]: E0216 13:14:46.149999 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.149552 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w"] Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.153498 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.155964 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.156166 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.166415 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w"] Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.218671 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21c958a8-65bd-4c54-8136-a8357a69d67b-secret-volume\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.219445 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5s55\" (UniqueName: \"kubernetes.io/projected/21c958a8-65bd-4c54-8136-a8357a69d67b-kube-api-access-w5s55\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.219556 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21c958a8-65bd-4c54-8136-a8357a69d67b-config-volume\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.321232 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21c958a8-65bd-4c54-8136-a8357a69d67b-secret-volume\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.321304 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5s55\" (UniqueName: \"kubernetes.io/projected/21c958a8-65bd-4c54-8136-a8357a69d67b-kube-api-access-w5s55\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.321337 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21c958a8-65bd-4c54-8136-a8357a69d67b-config-volume\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.322464 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21c958a8-65bd-4c54-8136-a8357a69d67b-config-volume\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.343402 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/21c958a8-65bd-4c54-8136-a8357a69d67b-secret-volume\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.346215 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5s55\" (UniqueName: \"kubernetes.io/projected/21c958a8-65bd-4c54-8136-a8357a69d67b-kube-api-access-w5s55\") pod \"collect-profiles-29520795-5882w\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.476997 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:00 crc kubenswrapper[4799]: I0216 13:15:00.919663 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w"] Feb 16 13:15:01 crc kubenswrapper[4799]: I0216 13:15:01.149611 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:15:01 crc kubenswrapper[4799]: E0216 13:15:01.150398 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:15:01 crc kubenswrapper[4799]: I0216 13:15:01.491698 4799 generic.go:334] "Generic (PLEG): container finished" podID="21c958a8-65bd-4c54-8136-a8357a69d67b" containerID="614e5c11ad6d4723da3490c631afd77844d96f43156bfad5654994db3fb07fc4" 
exitCode=0 Feb 16 13:15:01 crc kubenswrapper[4799]: I0216 13:15:01.491764 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" event={"ID":"21c958a8-65bd-4c54-8136-a8357a69d67b","Type":"ContainerDied","Data":"614e5c11ad6d4723da3490c631afd77844d96f43156bfad5654994db3fb07fc4"} Feb 16 13:15:01 crc kubenswrapper[4799]: I0216 13:15:01.492062 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" event={"ID":"21c958a8-65bd-4c54-8136-a8357a69d67b","Type":"ContainerStarted","Data":"f153bf6809421371a0053cc5be7d3de3681fae85615c6be3f331f707e3904b12"} Feb 16 13:15:02 crc kubenswrapper[4799]: I0216 13:15:02.907663 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:02 crc kubenswrapper[4799]: I0216 13:15:02.979351 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21c958a8-65bd-4c54-8136-a8357a69d67b-config-volume\") pod \"21c958a8-65bd-4c54-8136-a8357a69d67b\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " Feb 16 13:15:02 crc kubenswrapper[4799]: I0216 13:15:02.979507 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5s55\" (UniqueName: \"kubernetes.io/projected/21c958a8-65bd-4c54-8136-a8357a69d67b-kube-api-access-w5s55\") pod \"21c958a8-65bd-4c54-8136-a8357a69d67b\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " Feb 16 13:15:02 crc kubenswrapper[4799]: I0216 13:15:02.979578 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21c958a8-65bd-4c54-8136-a8357a69d67b-secret-volume\") pod \"21c958a8-65bd-4c54-8136-a8357a69d67b\" (UID: \"21c958a8-65bd-4c54-8136-a8357a69d67b\") " Feb 16 
13:15:02 crc kubenswrapper[4799]: I0216 13:15:02.980885 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21c958a8-65bd-4c54-8136-a8357a69d67b-config-volume" (OuterVolumeSpecName: "config-volume") pod "21c958a8-65bd-4c54-8136-a8357a69d67b" (UID: "21c958a8-65bd-4c54-8136-a8357a69d67b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:15:02 crc kubenswrapper[4799]: I0216 13:15:02.987698 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c958a8-65bd-4c54-8136-a8357a69d67b-kube-api-access-w5s55" (OuterVolumeSpecName: "kube-api-access-w5s55") pod "21c958a8-65bd-4c54-8136-a8357a69d67b" (UID: "21c958a8-65bd-4c54-8136-a8357a69d67b"). InnerVolumeSpecName "kube-api-access-w5s55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:15:02 crc kubenswrapper[4799]: I0216 13:15:02.989583 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c958a8-65bd-4c54-8136-a8357a69d67b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "21c958a8-65bd-4c54-8136-a8357a69d67b" (UID: "21c958a8-65bd-4c54-8136-a8357a69d67b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:15:03 crc kubenswrapper[4799]: I0216 13:15:03.082038 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5s55\" (UniqueName: \"kubernetes.io/projected/21c958a8-65bd-4c54-8136-a8357a69d67b-kube-api-access-w5s55\") on node \"crc\" DevicePath \"\"" Feb 16 13:15:03 crc kubenswrapper[4799]: I0216 13:15:03.082516 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/21c958a8-65bd-4c54-8136-a8357a69d67b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:15:03 crc kubenswrapper[4799]: I0216 13:15:03.082601 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21c958a8-65bd-4c54-8136-a8357a69d67b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:15:03 crc kubenswrapper[4799]: I0216 13:15:03.515021 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" event={"ID":"21c958a8-65bd-4c54-8136-a8357a69d67b","Type":"ContainerDied","Data":"f153bf6809421371a0053cc5be7d3de3681fae85615c6be3f331f707e3904b12"} Feb 16 13:15:03 crc kubenswrapper[4799]: I0216 13:15:03.515064 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f153bf6809421371a0053cc5be7d3de3681fae85615c6be3f331f707e3904b12" Feb 16 13:15:03 crc kubenswrapper[4799]: I0216 13:15:03.515090 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w" Feb 16 13:15:03 crc kubenswrapper[4799]: I0216 13:15:03.986300 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l"] Feb 16 13:15:03 crc kubenswrapper[4799]: I0216 13:15:03.994694 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520750-5sn7l"] Feb 16 13:15:05 crc kubenswrapper[4799]: I0216 13:15:05.161959 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ab08e0-f4bc-4dcc-abaf-876b063165ad" path="/var/lib/kubelet/pods/e6ab08e0-f4bc-4dcc-abaf-876b063165ad/volumes" Feb 16 13:15:15 crc kubenswrapper[4799]: I0216 13:15:15.157412 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:15:15 crc kubenswrapper[4799]: E0216 13:15:15.158243 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:15:26 crc kubenswrapper[4799]: I0216 13:15:26.149078 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:15:26 crc kubenswrapper[4799]: E0216 13:15:26.150081 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:15:41 crc kubenswrapper[4799]: I0216 13:15:41.150167 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:15:41 crc kubenswrapper[4799]: E0216 13:15:41.151484 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:15:43 crc kubenswrapper[4799]: I0216 13:15:43.747722 4799 scope.go:117] "RemoveContainer" containerID="aaa7a0ce9bbd09bbe65107188212b2ff4c9b1f30ecbce2fafc52dbfbbfd09d09" Feb 16 13:15:52 crc kubenswrapper[4799]: I0216 13:15:52.150388 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:15:52 crc kubenswrapper[4799]: E0216 13:15:52.151204 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:16:04 crc kubenswrapper[4799]: I0216 13:16:04.150568 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:16:04 crc kubenswrapper[4799]: E0216 13:16:04.151740 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:16:16 crc kubenswrapper[4799]: I0216 13:16:16.151175 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:16:16 crc kubenswrapper[4799]: E0216 13:16:16.152031 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:16:20 crc kubenswrapper[4799]: I0216 13:16:20.286502 4799 generic.go:334] "Generic (PLEG): container finished" podID="8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" containerID="1186a644b512ecb435c33d8945f503c3000ed5194a390cd29779140e2a2278b1" exitCode=0 Feb 16 13:16:20 crc kubenswrapper[4799]: I0216 13:16:20.286621 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" event={"ID":"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7","Type":"ContainerDied","Data":"1186a644b512ecb435c33d8945f503c3000ed5194a390cd29779140e2a2278b1"} Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.765856 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.875082 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-1\") pod \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.875624 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-2\") pod \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.875661 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-inventory\") pod \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.875723 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ssh-key-openstack-edpm-ipam\") pod \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.875758 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z65pg\" (UniqueName: \"kubernetes.io/projected/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-kube-api-access-z65pg\") pod \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 
13:16:21.875898 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-0\") pod \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.875981 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-telemetry-combined-ca-bundle\") pod \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\" (UID: \"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7\") " Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.880373 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-kube-api-access-z65pg" (OuterVolumeSpecName: "kube-api-access-z65pg") pod "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" (UID: "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7"). InnerVolumeSpecName "kube-api-access-z65pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.880857 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" (UID: "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.905684 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-inventory" (OuterVolumeSpecName: "inventory") pod "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" (UID: "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.910483 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" (UID: "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.912164 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" (UID: "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.913549 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" (UID: "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.917361 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" (UID: "8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.978539 4799 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.978570 4799 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.978585 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.978598 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z65pg\" (UniqueName: \"kubernetes.io/projected/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-kube-api-access-z65pg\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.978609 4799 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.978622 4799 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:21 crc kubenswrapper[4799]: I0216 13:16:21.978635 4799 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:22 crc kubenswrapper[4799]: I0216 13:16:22.308992 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" event={"ID":"8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7","Type":"ContainerDied","Data":"2682cb88c260620d558550e6749792538e798bfd804eecc73503378dc832b8fa"} Feb 16 13:16:22 crc kubenswrapper[4799]: I0216 13:16:22.309040 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2682cb88c260620d558550e6749792538e798bfd804eecc73503378dc832b8fa" Feb 16 13:16:22 crc kubenswrapper[4799]: I0216 13:16:22.309100 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9" Feb 16 13:16:28 crc kubenswrapper[4799]: I0216 13:16:28.149231 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:16:28 crc kubenswrapper[4799]: E0216 13:16:28.149992 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:16:29 crc kubenswrapper[4799]: I0216 13:16:29.944957 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zhx8q"] Feb 16 13:16:29 crc kubenswrapper[4799]: E0216 13:16:29.945784 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:16:29 crc 
kubenswrapper[4799]: I0216 13:16:29.945798 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:16:29 crc kubenswrapper[4799]: E0216 13:16:29.945811 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c958a8-65bd-4c54-8136-a8357a69d67b" containerName="collect-profiles" Feb 16 13:16:29 crc kubenswrapper[4799]: I0216 13:16:29.945818 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c958a8-65bd-4c54-8136-a8357a69d67b" containerName="collect-profiles" Feb 16 13:16:29 crc kubenswrapper[4799]: I0216 13:16:29.946037 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c958a8-65bd-4c54-8136-a8357a69d67b" containerName="collect-profiles" Feb 16 13:16:29 crc kubenswrapper[4799]: I0216 13:16:29.946051 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:16:29 crc kubenswrapper[4799]: I0216 13:16:29.947693 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:29 crc kubenswrapper[4799]: I0216 13:16:29.961055 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zhx8q"] Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.036818 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-utilities\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.036950 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfwth\" (UniqueName: \"kubernetes.io/projected/74743349-9245-42d6-956d-c830833e0ca0-kube-api-access-zfwth\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.037001 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-catalog-content\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.139218 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-utilities\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.139544 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zfwth\" (UniqueName: \"kubernetes.io/projected/74743349-9245-42d6-956d-c830833e0ca0-kube-api-access-zfwth\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.139658 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-catalog-content\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.140232 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-utilities\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.141020 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-catalog-content\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.170463 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfwth\" (UniqueName: \"kubernetes.io/projected/74743349-9245-42d6-956d-c830833e0ca0-kube-api-access-zfwth\") pod \"certified-operators-zhx8q\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.267428 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:30 crc kubenswrapper[4799]: I0216 13:16:30.589708 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zhx8q"] Feb 16 13:16:31 crc kubenswrapper[4799]: I0216 13:16:31.407499 4799 generic.go:334] "Generic (PLEG): container finished" podID="74743349-9245-42d6-956d-c830833e0ca0" containerID="0adbaea8654cedd969a997a181e30dd2d795e659a25e95671b5e5e6c98d7a5ea" exitCode=0 Feb 16 13:16:31 crc kubenswrapper[4799]: I0216 13:16:31.407557 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8q" event={"ID":"74743349-9245-42d6-956d-c830833e0ca0","Type":"ContainerDied","Data":"0adbaea8654cedd969a997a181e30dd2d795e659a25e95671b5e5e6c98d7a5ea"} Feb 16 13:16:31 crc kubenswrapper[4799]: I0216 13:16:31.407794 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8q" event={"ID":"74743349-9245-42d6-956d-c830833e0ca0","Type":"ContainerStarted","Data":"31a3236565a945a79c80cb308c5665120d5adee4e570562bddce8524dab79b94"} Feb 16 13:16:32 crc kubenswrapper[4799]: I0216 13:16:32.420757 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8q" event={"ID":"74743349-9245-42d6-956d-c830833e0ca0","Type":"ContainerStarted","Data":"84d6b48fe017eb6a60e1878734ea792c0278c22a47e0ad0491c48a8d9da7a985"} Feb 16 13:16:33 crc kubenswrapper[4799]: I0216 13:16:33.431779 4799 generic.go:334] "Generic (PLEG): container finished" podID="74743349-9245-42d6-956d-c830833e0ca0" containerID="84d6b48fe017eb6a60e1878734ea792c0278c22a47e0ad0491c48a8d9da7a985" exitCode=0 Feb 16 13:16:33 crc kubenswrapper[4799]: I0216 13:16:33.431873 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8q" 
event={"ID":"74743349-9245-42d6-956d-c830833e0ca0","Type":"ContainerDied","Data":"84d6b48fe017eb6a60e1878734ea792c0278c22a47e0ad0491c48a8d9da7a985"} Feb 16 13:16:34 crc kubenswrapper[4799]: I0216 13:16:34.444304 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8q" event={"ID":"74743349-9245-42d6-956d-c830833e0ca0","Type":"ContainerStarted","Data":"15006cba0997197bac45386f53c4fc395400b6f535a0914f2128b4841f0ce504"} Feb 16 13:16:34 crc kubenswrapper[4799]: I0216 13:16:34.473598 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zhx8q" podStartSLOduration=2.943725717 podStartE2EDuration="5.47357704s" podCreationTimestamp="2026-02-16 13:16:29 +0000 UTC" firstStartedPulling="2026-02-16 13:16:31.409512884 +0000 UTC m=+2697.002528218" lastFinishedPulling="2026-02-16 13:16:33.939364207 +0000 UTC m=+2699.532379541" observedRunningTime="2026-02-16 13:16:34.468539616 +0000 UTC m=+2700.061554950" watchObservedRunningTime="2026-02-16 13:16:34.47357704 +0000 UTC m=+2700.066592374" Feb 16 13:16:39 crc kubenswrapper[4799]: I0216 13:16:39.151654 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:16:39 crc kubenswrapper[4799]: E0216 13:16:39.152942 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.108057 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7djg"] Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 
13:16:40.111378 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.125922 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7djg"] Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.267803 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.267913 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.293163 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-utilities\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.293249 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-catalog-content\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.293342 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5r8d\" (UniqueName: \"kubernetes.io/projected/6764a7eb-f0f0-4145-877a-2dbc93eeb442-kube-api-access-m5r8d\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.316190 4799 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.395265 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5r8d\" (UniqueName: \"kubernetes.io/projected/6764a7eb-f0f0-4145-877a-2dbc93eeb442-kube-api-access-m5r8d\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.396421 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-utilities\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.396809 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-utilities\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.396863 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-catalog-content\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.397184 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-catalog-content\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " 
pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.416690 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5r8d\" (UniqueName: \"kubernetes.io/projected/6764a7eb-f0f0-4145-877a-2dbc93eeb442-kube-api-access-m5r8d\") pod \"redhat-operators-x7djg\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.438437 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.564405 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:40 crc kubenswrapper[4799]: I0216 13:16:40.940928 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7djg"] Feb 16 13:16:41 crc kubenswrapper[4799]: I0216 13:16:41.507544 4799 generic.go:334] "Generic (PLEG): container finished" podID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerID="1b63c45b91b5bd3eda52e3e06d372d8b57e991d434156cd30514c065004d8911" exitCode=0 Feb 16 13:16:41 crc kubenswrapper[4799]: I0216 13:16:41.507667 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7djg" event={"ID":"6764a7eb-f0f0-4145-877a-2dbc93eeb442","Type":"ContainerDied","Data":"1b63c45b91b5bd3eda52e3e06d372d8b57e991d434156cd30514c065004d8911"} Feb 16 13:16:41 crc kubenswrapper[4799]: I0216 13:16:41.507965 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7djg" event={"ID":"6764a7eb-f0f0-4145-877a-2dbc93eeb442","Type":"ContainerStarted","Data":"29e668f14ca897048b1b7d9ae2773065b6c20969a97955809e38c7367d0f28f2"} Feb 16 13:16:42 crc kubenswrapper[4799]: I0216 13:16:42.520325 4799 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-x7djg" event={"ID":"6764a7eb-f0f0-4145-877a-2dbc93eeb442","Type":"ContainerStarted","Data":"064436bd67494b05852f3989070e2c5facae0c5be4abd30f7fe9f6456201d639"} Feb 16 13:16:42 crc kubenswrapper[4799]: I0216 13:16:42.676400 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zhx8q"] Feb 16 13:16:43 crc kubenswrapper[4799]: I0216 13:16:43.528068 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zhx8q" podUID="74743349-9245-42d6-956d-c830833e0ca0" containerName="registry-server" containerID="cri-o://15006cba0997197bac45386f53c4fc395400b6f535a0914f2128b4841f0ce504" gracePeriod=2 Feb 16 13:16:45 crc kubenswrapper[4799]: I0216 13:16:45.552319 4799 generic.go:334] "Generic (PLEG): container finished" podID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerID="064436bd67494b05852f3989070e2c5facae0c5be4abd30f7fe9f6456201d639" exitCode=0 Feb 16 13:16:45 crc kubenswrapper[4799]: I0216 13:16:45.552372 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7djg" event={"ID":"6764a7eb-f0f0-4145-877a-2dbc93eeb442","Type":"ContainerDied","Data":"064436bd67494b05852f3989070e2c5facae0c5be4abd30f7fe9f6456201d639"} Feb 16 13:16:47 crc kubenswrapper[4799]: I0216 13:16:47.572778 4799 generic.go:334] "Generic (PLEG): container finished" podID="74743349-9245-42d6-956d-c830833e0ca0" containerID="15006cba0997197bac45386f53c4fc395400b6f535a0914f2128b4841f0ce504" exitCode=0 Feb 16 13:16:47 crc kubenswrapper[4799]: I0216 13:16:47.572845 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8q" event={"ID":"74743349-9245-42d6-956d-c830833e0ca0","Type":"ContainerDied","Data":"15006cba0997197bac45386f53c4fc395400b6f535a0914f2128b4841f0ce504"} Feb 16 13:16:47 crc kubenswrapper[4799]: I0216 13:16:47.574555 4799 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7djg" event={"ID":"6764a7eb-f0f0-4145-877a-2dbc93eeb442","Type":"ContainerStarted","Data":"eb7de8849194d57a339e057ea6e74575c1f5fbcb3bcd8c71fdf31e434eca4bd5"} Feb 16 13:16:47 crc kubenswrapper[4799]: I0216 13:16:47.602537 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7djg" podStartSLOduration=1.8354358130000001 podStartE2EDuration="7.602513982s" podCreationTimestamp="2026-02-16 13:16:40 +0000 UTC" firstStartedPulling="2026-02-16 13:16:41.509515524 +0000 UTC m=+2707.102530858" lastFinishedPulling="2026-02-16 13:16:47.276593683 +0000 UTC m=+2712.869609027" observedRunningTime="2026-02-16 13:16:47.595274986 +0000 UTC m=+2713.188290340" watchObservedRunningTime="2026-02-16 13:16:47.602513982 +0000 UTC m=+2713.195529326" Feb 16 13:16:47 crc kubenswrapper[4799]: I0216 13:16:47.852823 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.055511 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-catalog-content\") pod \"74743349-9245-42d6-956d-c830833e0ca0\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.055771 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfwth\" (UniqueName: \"kubernetes.io/projected/74743349-9245-42d6-956d-c830833e0ca0-kube-api-access-zfwth\") pod \"74743349-9245-42d6-956d-c830833e0ca0\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.055867 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-utilities\") pod \"74743349-9245-42d6-956d-c830833e0ca0\" (UID: \"74743349-9245-42d6-956d-c830833e0ca0\") " Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.056490 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-utilities" (OuterVolumeSpecName: "utilities") pod "74743349-9245-42d6-956d-c830833e0ca0" (UID: "74743349-9245-42d6-956d-c830833e0ca0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.062794 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74743349-9245-42d6-956d-c830833e0ca0-kube-api-access-zfwth" (OuterVolumeSpecName: "kube-api-access-zfwth") pod "74743349-9245-42d6-956d-c830833e0ca0" (UID: "74743349-9245-42d6-956d-c830833e0ca0"). InnerVolumeSpecName "kube-api-access-zfwth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.103769 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74743349-9245-42d6-956d-c830833e0ca0" (UID: "74743349-9245-42d6-956d-c830833e0ca0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.159041 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfwth\" (UniqueName: \"kubernetes.io/projected/74743349-9245-42d6-956d-c830833e0ca0-kube-api-access-zfwth\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.159080 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.159098 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74743349-9245-42d6-956d-c830833e0ca0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.585958 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8q" event={"ID":"74743349-9245-42d6-956d-c830833e0ca0","Type":"ContainerDied","Data":"31a3236565a945a79c80cb308c5665120d5adee4e570562bddce8524dab79b94"} Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.586026 4799 scope.go:117] "RemoveContainer" containerID="15006cba0997197bac45386f53c4fc395400b6f535a0914f2128b4841f0ce504" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.586967 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8q" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.615804 4799 scope.go:117] "RemoveContainer" containerID="84d6b48fe017eb6a60e1878734ea792c0278c22a47e0ad0491c48a8d9da7a985" Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.621900 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zhx8q"] Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.630191 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zhx8q"] Feb 16 13:16:48 crc kubenswrapper[4799]: I0216 13:16:48.649847 4799 scope.go:117] "RemoveContainer" containerID="0adbaea8654cedd969a997a181e30dd2d795e659a25e95671b5e5e6c98d7a5ea" Feb 16 13:16:49 crc kubenswrapper[4799]: I0216 13:16:49.162650 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74743349-9245-42d6-956d-c830833e0ca0" path="/var/lib/kubelet/pods/74743349-9245-42d6-956d-c830833e0ca0/volumes" Feb 16 13:16:50 crc kubenswrapper[4799]: I0216 13:16:50.439086 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:50 crc kubenswrapper[4799]: I0216 13:16:50.439180 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:16:51 crc kubenswrapper[4799]: I0216 13:16:51.491328 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7djg" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="registry-server" probeResult="failure" output=< Feb 16 13:16:51 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:16:51 crc kubenswrapper[4799]: > Feb 16 13:16:52 crc kubenswrapper[4799]: I0216 13:16:52.149768 4799 scope.go:117] "RemoveContainer" 
containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:16:52 crc kubenswrapper[4799]: I0216 13:16:52.621334 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"0f455c79e14fa1b0be07e059ec5a15012005a44a11fd4803ca25a5d892387d70"} Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.467013 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 16 13:16:56 crc kubenswrapper[4799]: E0216 13:16:56.467956 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74743349-9245-42d6-956d-c830833e0ca0" containerName="registry-server" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.467971 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="74743349-9245-42d6-956d-c830833e0ca0" containerName="registry-server" Feb 16 13:16:56 crc kubenswrapper[4799]: E0216 13:16:56.467989 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74743349-9245-42d6-956d-c830833e0ca0" containerName="extract-content" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.467996 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="74743349-9245-42d6-956d-c830833e0ca0" containerName="extract-content" Feb 16 13:16:56 crc kubenswrapper[4799]: E0216 13:16:56.468015 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74743349-9245-42d6-956d-c830833e0ca0" containerName="extract-utilities" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.468021 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="74743349-9245-42d6-956d-c830833e0ca0" containerName="extract-utilities" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.468288 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="74743349-9245-42d6-956d-c830833e0ca0" containerName="registry-server" Feb 16 13:16:56 crc 
kubenswrapper[4799]: I0216 13:16:56.469276 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.472648 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.488096 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.495858 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.495921 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-dev\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496031 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496338 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-sys\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 
13:16:56.496426 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-config-data\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496458 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-lib-modules\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496474 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496508 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-run\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496539 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496565 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496592 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496613 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496639 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-scripts\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496658 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnln6\" (UniqueName: \"kubernetes.io/projected/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-kube-api-access-nnln6\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.496743 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.570610 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.572623 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.576151 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.591902 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598641 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598702 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-sys\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598732 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598753 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598779 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-sys\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598762 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-run\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598870 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-dev\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598910 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-config-data\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598943 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-lib-modules\") pod \"cinder-backup-0\" (UID: 
\"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598967 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.598989 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599023 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-run\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599033 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-lib-modules\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599051 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599088 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-run\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599167 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599201 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599248 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599286 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599316 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-scripts\") pod \"cinder-backup-0\" (UID: 
\"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599334 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnln6\" (UniqueName: \"kubernetes.io/projected/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-kube-api-access-nnln6\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599365 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599376 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599426 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599492 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc 
kubenswrapper[4799]: I0216 13:16:56.599541 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599552 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599571 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599621 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-dev\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599677 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599707 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599803 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599829 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4vl\" (UniqueName: \"kubernetes.io/projected/64beb0d2-7a13-4a86-b4f8-8843611c254c-kube-api-access-5n4vl\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599855 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599883 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.599914 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-sys\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.600832 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-dev\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.607090 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.610068 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.611138 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.611183 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.611476 
4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-config-data\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.612734 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.614425 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.616067 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-scripts\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.616140 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.618243 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.626275 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.638236 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnln6\" (UniqueName: \"kubernetes.io/projected/ea67e1e3-d03f-49fa-a150-9ff09fca74ba-kube-api-access-nnln6\") pod \"cinder-backup-0\" (UID: \"ea67e1e3-d03f-49fa-a150-9ff09fca74ba\") " 
pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.701448 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.701512 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.701923 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.701533 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4vl\" (UniqueName: \"kubernetes.io/projected/64beb0d2-7a13-4a86-b4f8-8843611c254c-kube-api-access-5n4vl\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.705707 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.705742 4799 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-sys\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.705774 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.705807 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.705853 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.706469 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.706733 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-sys\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " 
pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.710204 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.710254 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.710319 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.710362 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.710414 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-run\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.710436 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.710502 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-run\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.710517 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712235 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-dev\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712322 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712379 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-lib-modules\") 
pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712463 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712490 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712590 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712631 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712656 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712696 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712755 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712789 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712825 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.712846 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsct\" (UniqueName: \"kubernetes.io/projected/5f3698ec-879f-4ead-8ac9-e08fa64c655e-kube-api-access-kfsct\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 
16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713763 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713016 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-dev\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713249 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713296 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713833 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713895 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713918 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713049 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.713377 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64beb0d2-7a13-4a86-b4f8-8843611c254c-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.717030 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.721573 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-config-data\") pod \"cinder-volume-nfs-0\" (UID: 
\"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.727783 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64beb0d2-7a13-4a86-b4f8-8843611c254c-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.729251 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4vl\" (UniqueName: \"kubernetes.io/projected/64beb0d2-7a13-4a86-b4f8-8843611c254c-kube-api-access-5n4vl\") pod \"cinder-volume-nfs-0\" (UID: \"64beb0d2-7a13-4a86-b4f8-8843611c254c\") " pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.785740 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816221 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816296 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816317 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-sys\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816338 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816352 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816378 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816461 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816497 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816544 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816563 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816596 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816613 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816629 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfsct\" (UniqueName: 
\"kubernetes.io/projected/5f3698ec-879f-4ead-8ac9-e08fa64c655e-kube-api-access-kfsct\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816648 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816752 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816796 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.816839 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " 
pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.817221 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.817263 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.817298 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.817327 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.817373 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.817758 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/5f3698ec-879f-4ead-8ac9-e08fa64c655e-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.820353 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.820832 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.821181 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.823368 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f3698ec-879f-4ead-8ac9-e08fa64c655e-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.832292 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfsct\" (UniqueName: \"kubernetes.io/projected/5f3698ec-879f-4ead-8ac9-e08fa64c655e-kube-api-access-kfsct\") pod \"cinder-volume-nfs-2-0\" (UID: \"5f3698ec-879f-4ead-8ac9-e08fa64c655e\") " 
pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:56 crc kubenswrapper[4799]: I0216 13:16:56.909627 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 16 13:16:57 crc kubenswrapper[4799]: I0216 13:16:57.111631 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:16:57 crc kubenswrapper[4799]: I0216 13:16:57.544989 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 16 13:16:57 crc kubenswrapper[4799]: I0216 13:16:57.683631 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ea67e1e3-d03f-49fa-a150-9ff09fca74ba","Type":"ContainerStarted","Data":"6b4bc8b4647a95203d254e9fa420d7c813afc7d258a18799981e7b545b2f0b92"} Feb 16 13:16:57 crc kubenswrapper[4799]: I0216 13:16:57.805471 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 16 13:16:58 crc kubenswrapper[4799]: I0216 13:16:58.693436 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"5f3698ec-879f-4ead-8ac9-e08fa64c655e","Type":"ContainerStarted","Data":"6dbdfbc9a4fd2c3521e475cfaeb47f3fa38775dd5fe0aea5cb77b63b2a152349"} Feb 16 13:16:58 crc kubenswrapper[4799]: I0216 13:16:58.693860 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 16 13:16:58 crc kubenswrapper[4799]: W0216 13:16:58.820142 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64beb0d2_7a13_4a86_b4f8_8843611c254c.slice/crio-df59339abb55c7de9ff4c3330de1748ecba1514a26b54d1116d8ed00bbfa752b WatchSource:0}: Error finding container df59339abb55c7de9ff4c3330de1748ecba1514a26b54d1116d8ed00bbfa752b: Status 404 returned error can't find the container with id 
df59339abb55c7de9ff4c3330de1748ecba1514a26b54d1116d8ed00bbfa752b Feb 16 13:16:59 crc kubenswrapper[4799]: I0216 13:16:59.708052 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"64beb0d2-7a13-4a86-b4f8-8843611c254c","Type":"ContainerStarted","Data":"df59339abb55c7de9ff4c3330de1748ecba1514a26b54d1116d8ed00bbfa752b"} Feb 16 13:16:59 crc kubenswrapper[4799]: I0216 13:16:59.711374 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ea67e1e3-d03f-49fa-a150-9ff09fca74ba","Type":"ContainerStarted","Data":"7694ac6c966e9540632cffe1f7119001a6e8a4150e21d6228d993c4da84e4b7e"} Feb 16 13:17:00 crc kubenswrapper[4799]: I0216 13:17:00.725752 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"64beb0d2-7a13-4a86-b4f8-8843611c254c","Type":"ContainerStarted","Data":"41b2fbbd75ede3105e1e6d0da28a5eb77abc1618a01f2b51dd1ff216bcc527e5"} Feb 16 13:17:00 crc kubenswrapper[4799]: I0216 13:17:00.728335 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"5f3698ec-879f-4ead-8ac9-e08fa64c655e","Type":"ContainerStarted","Data":"aec3a204684b73d4047e0b918d458ff0643faafb2f60bce2e7aab95feb362688"} Feb 16 13:17:00 crc kubenswrapper[4799]: I0216 13:17:00.730956 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ea67e1e3-d03f-49fa-a150-9ff09fca74ba","Type":"ContainerStarted","Data":"60c7853dc675071376782893dc63994f23d94b6060133e9b07f1430ce42ef584"} Feb 16 13:17:00 crc kubenswrapper[4799]: I0216 13:17:00.759531 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.239542429 podStartE2EDuration="4.759511767s" podCreationTimestamp="2026-02-16 13:16:56 +0000 UTC" firstStartedPulling="2026-02-16 13:16:57.550040643 +0000 UTC m=+2723.143055977" lastFinishedPulling="2026-02-16 
13:16:59.070009981 +0000 UTC m=+2724.663025315" observedRunningTime="2026-02-16 13:17:00.75470113 +0000 UTC m=+2726.347716464" watchObservedRunningTime="2026-02-16 13:17:00.759511767 +0000 UTC m=+2726.352527111" Feb 16 13:17:01 crc kubenswrapper[4799]: I0216 13:17:01.521046 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7djg" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="registry-server" probeResult="failure" output=< Feb 16 13:17:01 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:17:01 crc kubenswrapper[4799]: > Feb 16 13:17:01 crc kubenswrapper[4799]: I0216 13:17:01.741306 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"5f3698ec-879f-4ead-8ac9-e08fa64c655e","Type":"ContainerStarted","Data":"7c219c3b6e4f64cae442e7449349427ff9917cbb6d7222b3f72903d940bd50f4"} Feb 16 13:17:01 crc kubenswrapper[4799]: I0216 13:17:01.757031 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"64beb0d2-7a13-4a86-b4f8-8843611c254c","Type":"ContainerStarted","Data":"77bceca59db60b6902801007d41dd4001e4b52f05e228928204269b5dec5cffa"} Feb 16 13:17:01 crc kubenswrapper[4799]: W0216 13:17:01.787356 4799 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-0.slice/user@0.service/app.slice/systemd-tmpfiles-setup.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-0.slice/user@0.service/app.slice/systemd-tmpfiles-setup.service: no such file or directory Feb 16 13:17:01 crc kubenswrapper[4799]: I0216 13:17:01.793394 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 16 13:17:01 crc kubenswrapper[4799]: I0216 13:17:01.808708 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" 
podStartSLOduration=3.429040434 podStartE2EDuration="5.808689852s" podCreationTimestamp="2026-02-16 13:16:56 +0000 UTC" firstStartedPulling="2026-02-16 13:16:57.82123804 +0000 UTC m=+2723.414253374" lastFinishedPulling="2026-02-16 13:17:00.200887458 +0000 UTC m=+2725.793902792" observedRunningTime="2026-02-16 13:17:01.761467705 +0000 UTC m=+2727.354483049" watchObservedRunningTime="2026-02-16 13:17:01.808689852 +0000 UTC m=+2727.401705186" Feb 16 13:17:01 crc kubenswrapper[4799]: I0216 13:17:01.810727 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=4.538410097 podStartE2EDuration="5.81071763s" podCreationTimestamp="2026-02-16 13:16:56 +0000 UTC" firstStartedPulling="2026-02-16 13:16:58.923520201 +0000 UTC m=+2724.516535535" lastFinishedPulling="2026-02-16 13:17:00.195827734 +0000 UTC m=+2725.788843068" observedRunningTime="2026-02-16 13:17:01.803559016 +0000 UTC m=+2727.396574340" watchObservedRunningTime="2026-02-16 13:17:01.81071763 +0000 UTC m=+2727.403732964" Feb 16 13:17:01 crc kubenswrapper[4799]: I0216 13:17:01.911786 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Feb 16 13:17:02 crc kubenswrapper[4799]: I0216 13:17:02.111690 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:17:06 crc kubenswrapper[4799]: I0216 13:17:06.965071 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 16 13:17:07 crc kubenswrapper[4799]: I0216 13:17:07.134476 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Feb 16 13:17:07 crc kubenswrapper[4799]: I0216 13:17:07.321316 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Feb 16 13:17:09 crc kubenswrapper[4799]: I0216 13:17:09.929187 4799 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-82zmn"] Feb 16 13:17:09 crc kubenswrapper[4799]: I0216 13:17:09.932534 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:09 crc kubenswrapper[4799]: I0216 13:17:09.968874 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32789136-f921-4aee-9f3b-4f61c64cd97f-utilities\") pod \"community-operators-82zmn\" (UID: \"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:09 crc kubenswrapper[4799]: I0216 13:17:09.969081 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32789136-f921-4aee-9f3b-4f61c64cd97f-catalog-content\") pod \"community-operators-82zmn\" (UID: \"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:09 crc kubenswrapper[4799]: I0216 13:17:09.969249 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfkgm\" (UniqueName: \"kubernetes.io/projected/32789136-f921-4aee-9f3b-4f61c64cd97f-kube-api-access-jfkgm\") pod \"community-operators-82zmn\" (UID: \"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:09 crc kubenswrapper[4799]: I0216 13:17:09.973600 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82zmn"] Feb 16 13:17:10 crc kubenswrapper[4799]: I0216 13:17:10.103395 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfkgm\" (UniqueName: \"kubernetes.io/projected/32789136-f921-4aee-9f3b-4f61c64cd97f-kube-api-access-jfkgm\") pod 
\"community-operators-82zmn\" (UID: \"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:10 crc kubenswrapper[4799]: I0216 13:17:10.104003 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32789136-f921-4aee-9f3b-4f61c64cd97f-utilities\") pod \"community-operators-82zmn\" (UID: \"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:10 crc kubenswrapper[4799]: I0216 13:17:10.104138 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32789136-f921-4aee-9f3b-4f61c64cd97f-catalog-content\") pod \"community-operators-82zmn\" (UID: \"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:10 crc kubenswrapper[4799]: I0216 13:17:10.104778 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32789136-f921-4aee-9f3b-4f61c64cd97f-catalog-content\") pod \"community-operators-82zmn\" (UID: \"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:10 crc kubenswrapper[4799]: I0216 13:17:10.105364 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32789136-f921-4aee-9f3b-4f61c64cd97f-utilities\") pod \"community-operators-82zmn\" (UID: \"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:10 crc kubenswrapper[4799]: I0216 13:17:10.155504 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfkgm\" (UniqueName: \"kubernetes.io/projected/32789136-f921-4aee-9f3b-4f61c64cd97f-kube-api-access-jfkgm\") pod \"community-operators-82zmn\" (UID: 
\"32789136-f921-4aee-9f3b-4f61c64cd97f\") " pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:10 crc kubenswrapper[4799]: I0216 13:17:10.276695 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:11 crc kubenswrapper[4799]: I0216 13:17:11.001806 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82zmn"] Feb 16 13:17:11 crc kubenswrapper[4799]: I0216 13:17:11.503634 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7djg" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="registry-server" probeResult="failure" output=< Feb 16 13:17:11 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:17:11 crc kubenswrapper[4799]: > Feb 16 13:17:11 crc kubenswrapper[4799]: I0216 13:17:11.911570 4799 generic.go:334] "Generic (PLEG): container finished" podID="32789136-f921-4aee-9f3b-4f61c64cd97f" containerID="1c2751b6de8082cd9f36faa2bd05d19044e7a20c517641d707e4eceb55771b89" exitCode=0 Feb 16 13:17:11 crc kubenswrapper[4799]: I0216 13:17:11.911782 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82zmn" event={"ID":"32789136-f921-4aee-9f3b-4f61c64cd97f","Type":"ContainerDied","Data":"1c2751b6de8082cd9f36faa2bd05d19044e7a20c517641d707e4eceb55771b89"} Feb 16 13:17:11 crc kubenswrapper[4799]: I0216 13:17:11.911904 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82zmn" event={"ID":"32789136-f921-4aee-9f3b-4f61c64cd97f","Type":"ContainerStarted","Data":"b82039abaae1f87c30ad0ee856d3793fd5e65a81f39940f0a8bd9495a029d46b"} Feb 16 13:17:11 crc kubenswrapper[4799]: I0216 13:17:11.916988 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:17:16 crc kubenswrapper[4799]: I0216 
13:17:16.964966 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82zmn" event={"ID":"32789136-f921-4aee-9f3b-4f61c64cd97f","Type":"ContainerStarted","Data":"ab8d8940e77845e1ba77c1509f27bb57be21eb8c8ee1249800da52145dc6c492"} Feb 16 13:17:17 crc kubenswrapper[4799]: I0216 13:17:17.978305 4799 generic.go:334] "Generic (PLEG): container finished" podID="32789136-f921-4aee-9f3b-4f61c64cd97f" containerID="ab8d8940e77845e1ba77c1509f27bb57be21eb8c8ee1249800da52145dc6c492" exitCode=0 Feb 16 13:17:17 crc kubenswrapper[4799]: I0216 13:17:17.978357 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82zmn" event={"ID":"32789136-f921-4aee-9f3b-4f61c64cd97f","Type":"ContainerDied","Data":"ab8d8940e77845e1ba77c1509f27bb57be21eb8c8ee1249800da52145dc6c492"} Feb 16 13:17:18 crc kubenswrapper[4799]: I0216 13:17:18.992896 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82zmn" event={"ID":"32789136-f921-4aee-9f3b-4f61c64cd97f","Type":"ContainerStarted","Data":"6b44ac7bf649053eacc0133dbf4c6b76f12f8911f2876ce110c7722a18c3b2b2"} Feb 16 13:17:19 crc kubenswrapper[4799]: I0216 13:17:19.020934 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-82zmn" podStartSLOduration=3.525552704 podStartE2EDuration="10.020896752s" podCreationTimestamp="2026-02-16 13:17:09 +0000 UTC" firstStartedPulling="2026-02-16 13:17:11.916722682 +0000 UTC m=+2737.509738026" lastFinishedPulling="2026-02-16 13:17:18.41206674 +0000 UTC m=+2744.005082074" observedRunningTime="2026-02-16 13:17:19.0113497 +0000 UTC m=+2744.604365024" watchObservedRunningTime="2026-02-16 13:17:19.020896752 +0000 UTC m=+2744.613912086" Feb 16 13:17:20 crc kubenswrapper[4799]: I0216 13:17:20.278368 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-82zmn" 
Feb 16 13:17:20 crc kubenswrapper[4799]: I0216 13:17:20.278743 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:21 crc kubenswrapper[4799]: I0216 13:17:21.324664 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-82zmn" podUID="32789136-f921-4aee-9f3b-4f61c64cd97f" containerName="registry-server" probeResult="failure" output=< Feb 16 13:17:21 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:17:21 crc kubenswrapper[4799]: > Feb 16 13:17:21 crc kubenswrapper[4799]: I0216 13:17:21.485171 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7djg" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="registry-server" probeResult="failure" output=< Feb 16 13:17:21 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:17:21 crc kubenswrapper[4799]: > Feb 16 13:17:30 crc kubenswrapper[4799]: I0216 13:17:30.329032 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:30 crc kubenswrapper[4799]: I0216 13:17:30.385012 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-82zmn" Feb 16 13:17:30 crc kubenswrapper[4799]: I0216 13:17:30.459295 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82zmn"] Feb 16 13:17:30 crc kubenswrapper[4799]: I0216 13:17:30.491088 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:17:30 crc kubenswrapper[4799]: I0216 13:17:30.555805 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:17:30 crc kubenswrapper[4799]: 
I0216 13:17:30.571276 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhx9b"] Feb 16 13:17:30 crc kubenswrapper[4799]: I0216 13:17:30.571483 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mhx9b" podUID="fc89a2f7-851b-473a-818c-db718947d490" containerName="registry-server" containerID="cri-o://37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00" gracePeriod=2 Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.096839 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhx9b" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.158379 4799 generic.go:334] "Generic (PLEG): container finished" podID="fc89a2f7-851b-473a-818c-db718947d490" containerID="37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00" exitCode=0 Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.159424 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhx9b" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.164094 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhx9b" event={"ID":"fc89a2f7-851b-473a-818c-db718947d490","Type":"ContainerDied","Data":"37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00"} Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.164148 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhx9b" event={"ID":"fc89a2f7-851b-473a-818c-db718947d490","Type":"ContainerDied","Data":"d438c7c93c31b3cd75623f3d254d559f41dcc190534e4257c2cb2671eb142453"} Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.164168 4799 scope.go:117] "RemoveContainer" containerID="37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.181203 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-catalog-content\") pod \"fc89a2f7-851b-473a-818c-db718947d490\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.181281 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-utilities\") pod \"fc89a2f7-851b-473a-818c-db718947d490\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.181372 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkr2v\" (UniqueName: \"kubernetes.io/projected/fc89a2f7-851b-473a-818c-db718947d490-kube-api-access-wkr2v\") pod \"fc89a2f7-851b-473a-818c-db718947d490\" (UID: \"fc89a2f7-851b-473a-818c-db718947d490\") " Feb 16 13:17:31 crc 
kubenswrapper[4799]: I0216 13:17:31.187208 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-utilities" (OuterVolumeSpecName: "utilities") pod "fc89a2f7-851b-473a-818c-db718947d490" (UID: "fc89a2f7-851b-473a-818c-db718947d490"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.190915 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc89a2f7-851b-473a-818c-db718947d490-kube-api-access-wkr2v" (OuterVolumeSpecName: "kube-api-access-wkr2v") pod "fc89a2f7-851b-473a-818c-db718947d490" (UID: "fc89a2f7-851b-473a-818c-db718947d490"). InnerVolumeSpecName "kube-api-access-wkr2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.196588 4799 scope.go:117] "RemoveContainer" containerID="8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.265674 4799 scope.go:117] "RemoveContainer" containerID="0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.279769 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc89a2f7-851b-473a-818c-db718947d490" (UID: "fc89a2f7-851b-473a-818c-db718947d490"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.286683 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkr2v\" (UniqueName: \"kubernetes.io/projected/fc89a2f7-851b-473a-818c-db718947d490-kube-api-access-wkr2v\") on node \"crc\" DevicePath \"\"" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.286726 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.286735 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc89a2f7-851b-473a-818c-db718947d490-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.320273 4799 scope.go:117] "RemoveContainer" containerID="37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00" Feb 16 13:17:31 crc kubenswrapper[4799]: E0216 13:17:31.320715 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00\": container with ID starting with 37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00 not found: ID does not exist" containerID="37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.320745 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00"} err="failed to get container status \"37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00\": rpc error: code = NotFound desc = could not find container \"37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00\": container with ID 
starting with 37d1abe8299b958b336809ba7778dfe5c01c6fd380dac621cc575eb01013bb00 not found: ID does not exist" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.320764 4799 scope.go:117] "RemoveContainer" containerID="8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc" Feb 16 13:17:31 crc kubenswrapper[4799]: E0216 13:17:31.320987 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc\": container with ID starting with 8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc not found: ID does not exist" containerID="8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.321009 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc"} err="failed to get container status \"8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc\": rpc error: code = NotFound desc = could not find container \"8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc\": container with ID starting with 8c4315b5ef2a86a98107613aacb96fb848ffdd637e3d5f8a0ac4167d154de8cc not found: ID does not exist" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.321020 4799 scope.go:117] "RemoveContainer" containerID="0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7" Feb 16 13:17:31 crc kubenswrapper[4799]: E0216 13:17:31.321304 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7\": container with ID starting with 0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7 not found: ID does not exist" containerID="0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7" Feb 16 
13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.321325 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7"} err="failed to get container status \"0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7\": rpc error: code = NotFound desc = could not find container \"0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7\": container with ID starting with 0bee87a6d618e814a161c5e7ee743752493897e7a0111645abbbf90059669ab7 not found: ID does not exist" Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.492562 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhx9b"] Feb 16 13:17:31 crc kubenswrapper[4799]: I0216 13:17:31.500674 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mhx9b"] Feb 16 13:17:32 crc kubenswrapper[4799]: I0216 13:17:32.771656 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7djg"] Feb 16 13:17:32 crc kubenswrapper[4799]: I0216 13:17:32.771915 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7djg" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="registry-server" containerID="cri-o://eb7de8849194d57a339e057ea6e74575c1f5fbcb3bcd8c71fdf31e434eca4bd5" gracePeriod=2 Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.184053 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc89a2f7-851b-473a-818c-db718947d490" path="/var/lib/kubelet/pods/fc89a2f7-851b-473a-818c-db718947d490/volumes" Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.200722 4799 generic.go:334] "Generic (PLEG): container finished" podID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerID="eb7de8849194d57a339e057ea6e74575c1f5fbcb3bcd8c71fdf31e434eca4bd5" exitCode=0 Feb 16 13:17:33 crc 
kubenswrapper[4799]: I0216 13:17:33.200769 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7djg" event={"ID":"6764a7eb-f0f0-4145-877a-2dbc93eeb442","Type":"ContainerDied","Data":"eb7de8849194d57a339e057ea6e74575c1f5fbcb3bcd8c71fdf31e434eca4bd5"} Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.356257 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.433999 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5r8d\" (UniqueName: \"kubernetes.io/projected/6764a7eb-f0f0-4145-877a-2dbc93eeb442-kube-api-access-m5r8d\") pod \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.434208 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-catalog-content\") pod \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.434319 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-utilities\") pod \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\" (UID: \"6764a7eb-f0f0-4145-877a-2dbc93eeb442\") " Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.434939 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-utilities" (OuterVolumeSpecName: "utilities") pod "6764a7eb-f0f0-4145-877a-2dbc93eeb442" (UID: "6764a7eb-f0f0-4145-877a-2dbc93eeb442"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.435074 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.440883 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6764a7eb-f0f0-4145-877a-2dbc93eeb442-kube-api-access-m5r8d" (OuterVolumeSpecName: "kube-api-access-m5r8d") pod "6764a7eb-f0f0-4145-877a-2dbc93eeb442" (UID: "6764a7eb-f0f0-4145-877a-2dbc93eeb442"). InnerVolumeSpecName "kube-api-access-m5r8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.536904 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5r8d\" (UniqueName: \"kubernetes.io/projected/6764a7eb-f0f0-4145-877a-2dbc93eeb442-kube-api-access-m5r8d\") on node \"crc\" DevicePath \"\"" Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.583968 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6764a7eb-f0f0-4145-877a-2dbc93eeb442" (UID: "6764a7eb-f0f0-4145-877a-2dbc93eeb442"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:17:33 crc kubenswrapper[4799]: I0216 13:17:33.638582 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6764a7eb-f0f0-4145-877a-2dbc93eeb442-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:17:34 crc kubenswrapper[4799]: I0216 13:17:34.213652 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7djg" event={"ID":"6764a7eb-f0f0-4145-877a-2dbc93eeb442","Type":"ContainerDied","Data":"29e668f14ca897048b1b7d9ae2773065b6c20969a97955809e38c7367d0f28f2"} Feb 16 13:17:34 crc kubenswrapper[4799]: I0216 13:17:34.213704 4799 scope.go:117] "RemoveContainer" containerID="eb7de8849194d57a339e057ea6e74575c1f5fbcb3bcd8c71fdf31e434eca4bd5" Feb 16 13:17:34 crc kubenswrapper[4799]: I0216 13:17:34.213713 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7djg" Feb 16 13:17:34 crc kubenswrapper[4799]: I0216 13:17:34.236689 4799 scope.go:117] "RemoveContainer" containerID="064436bd67494b05852f3989070e2c5facae0c5be4abd30f7fe9f6456201d639" Feb 16 13:17:34 crc kubenswrapper[4799]: I0216 13:17:34.255264 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7djg"] Feb 16 13:17:34 crc kubenswrapper[4799]: I0216 13:17:34.272738 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7djg"] Feb 16 13:17:34 crc kubenswrapper[4799]: I0216 13:17:34.285686 4799 scope.go:117] "RemoveContainer" containerID="1b63c45b91b5bd3eda52e3e06d372d8b57e991d434156cd30514c065004d8911" Feb 16 13:17:35 crc kubenswrapper[4799]: I0216 13:17:35.161886 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" path="/var/lib/kubelet/pods/6764a7eb-f0f0-4145-877a-2dbc93eeb442/volumes" Feb 16 13:18:07 crc 
kubenswrapper[4799]: I0216 13:18:07.256911 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 13:18:07 crc kubenswrapper[4799]: I0216 13:18:07.257801 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="prometheus" containerID="cri-o://950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b" gracePeriod=600
Feb 16 13:18:07 crc kubenswrapper[4799]: I0216 13:18:07.257852 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="thanos-sidecar" containerID="cri-o://345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09" gracePeriod=600
Feb 16 13:18:07 crc kubenswrapper[4799]: I0216 13:18:07.257974 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="config-reloader" containerID="cri-o://c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47" gracePeriod=600
Feb 16 13:18:07 crc kubenswrapper[4799]: I0216 13:18:07.511583 4799 generic.go:334] "Generic (PLEG): container finished" podID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerID="345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09" exitCode=0
Feb 16 13:18:07 crc kubenswrapper[4799]: I0216 13:18:07.511612 4799 generic.go:334] "Generic (PLEG): container finished" podID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerID="950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b" exitCode=0
Feb 16 13:18:07 crc kubenswrapper[4799]: I0216 13:18:07.511632 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerDied","Data":"345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09"}
Feb 16 13:18:07 crc kubenswrapper[4799]: I0216 13:18:07.511659 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerDied","Data":"950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b"}
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.287717 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.419934 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-tls-assets\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.419994 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-0\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.420049 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.420088 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2hks\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-kube-api-access-k2hks\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.420537 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.420735 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.420884 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.420929 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-config\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.420952 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-thanos-prometheus-http-client-file\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.420980 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.421019 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3dbdb842-28de-45d4-8706-54b8671c18b7-config-out\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.421038 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-1\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.421059 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-secret-combined-ca-bundle\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.421106 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-2\") pod \"3dbdb842-28de-45d4-8706-54b8671c18b7\" (UID: \"3dbdb842-28de-45d4-8706-54b8671c18b7\") "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.421717 4799 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.422809 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.423085 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.427243 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.428550 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.435045 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-config" (OuterVolumeSpecName: "config") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.435252 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.438982 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.439494 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbdb842-28de-45d4-8706-54b8671c18b7-config-out" (OuterVolumeSpecName: "config-out") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.442850 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-kube-api-access-k2hks" (OuterVolumeSpecName: "kube-api-access-k2hks") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "kube-api-access-k2hks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.446296 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.460005 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.507095 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config" (OuterVolumeSpecName: "web-config") pod "3dbdb842-28de-45d4-8706-54b8671c18b7" (UID: "3dbdb842-28de-45d4-8706-54b8671c18b7"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523394 4799 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523441 4799 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523459 4799 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523477 4799 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523506 4799 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3dbdb842-28de-45d4-8706-54b8671c18b7-config-out\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523521 4799 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523537 4799 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523552 4799 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3dbdb842-28de-45d4-8706-54b8671c18b7-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523565 4799 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523577 4799 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3dbdb842-28de-45d4-8706-54b8671c18b7-web-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523589 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2hks\" (UniqueName: \"kubernetes.io/projected/3dbdb842-28de-45d4-8706-54b8671c18b7-kube-api-access-k2hks\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.523627 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") on node \"crc\" "
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.527534 4799 generic.go:334] "Generic (PLEG): container finished" podID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerID="c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47" exitCode=0
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.527583 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerDied","Data":"c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47"}
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.527614 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3dbdb842-28de-45d4-8706-54b8671c18b7","Type":"ContainerDied","Data":"ae586faf6df30d64c02a242650ac74471761e1617f12e71dceaf728f5355e7a3"}
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.527634 4799 scope.go:117] "RemoveContainer" containerID="345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.527790 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.554954 4799 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.555168 4799 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd") on node "crc"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.619569 4799 scope.go:117] "RemoveContainer" containerID="c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.625747 4799 reconciler_common.go:293] "Volume detached for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.639110 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.660892 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.668549 4799 scope.go:117] "RemoveContainer" containerID="950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.685507 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687140 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc89a2f7-851b-473a-818c-db718947d490" containerName="registry-server"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687173 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc89a2f7-851b-473a-818c-db718947d490" containerName="registry-server"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687197 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="extract-content"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687207 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="extract-content"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687223 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc89a2f7-851b-473a-818c-db718947d490" containerName="extract-utilities"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687231 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc89a2f7-851b-473a-818c-db718947d490" containerName="extract-utilities"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687266 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="prometheus"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687275 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="prometheus"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687304 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="extract-utilities"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687318 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="extract-utilities"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687349 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="thanos-sidecar"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687357 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="thanos-sidecar"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687385 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc89a2f7-851b-473a-818c-db718947d490" containerName="extract-content"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687394 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc89a2f7-851b-473a-818c-db718947d490" containerName="extract-content"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687415 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="registry-server"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687423 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="registry-server"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687442 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="init-config-reloader"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687456 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="init-config-reloader"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.687477 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="config-reloader"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687487 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="config-reloader"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687964 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6764a7eb-f0f0-4145-877a-2dbc93eeb442" containerName="registry-server"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.687991 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="prometheus"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.689898 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="config-reloader"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.689926 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" containerName="thanos-sidecar"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.689960 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc89a2f7-851b-473a-818c-db718947d490" containerName="registry-server"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.694678 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.702773 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9r2q7"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.702827 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.703024 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.703197 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.703332 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.704277 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.702779 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.708424 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.708667 4799 scope.go:117] "RemoveContainer" containerID="c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.716455 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.759248 4799 scope.go:117] "RemoveContainer" containerID="345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.759668 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09\": container with ID starting with 345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09 not found: ID does not exist" containerID="345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.759767 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09"} err="failed to get container status \"345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09\": rpc error: code = NotFound desc = could not find container \"345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09\": container with ID starting with 345f504fbcb998d6ffb043599c515487b6bbdb8bfb8cd76174264f85cb6b4a09 not found: ID does not exist"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.759808 4799 scope.go:117] "RemoveContainer" containerID="c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.760269 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47\": container with ID starting with c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47 not found: ID does not exist" containerID="c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.760297 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47"} err="failed to get container status \"c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47\": rpc error: code = NotFound desc = could not find container \"c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47\": container with ID starting with c9abe2d1b53aa1f5492c91d383456f5bcd5e3fbecf0024374bf16a0c9113cf47 not found: ID does not exist"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.760316 4799 scope.go:117] "RemoveContainer" containerID="950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.760525 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b\": container with ID starting with 950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b not found: ID does not exist" containerID="950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.760552 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b"} err="failed to get container status \"950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b\": rpc error: code = NotFound desc = could not find container \"950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b\": container with ID starting with 950395e30e1bee5ec9c22acf6723a4f420e75ff936aaf755489a8df08f9e4f5b not found: ID does not exist"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.760569 4799 scope.go:117] "RemoveContainer" containerID="c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c"
Feb 16 13:18:08 crc kubenswrapper[4799]: E0216 13:18:08.760767 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c\": container with ID starting with c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c not found: ID does not exist" containerID="c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.760795 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c"} err="failed to get container status \"c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c\": rpc error: code = NotFound desc = could not find container \"c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c\": container with ID starting with c8b64d9953b767a7ab247fd9ba2457a40cf0e549c4e5d4754e5b456056932d0c not found: ID does not exist"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833305 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833358 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833403 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833534 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-config\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833698 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833742 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833779 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833822 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833905 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c10be81f-4b62-414a-bfec-3851332ecd48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.833968 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c10be81f-4b62-414a-bfec-3851332ecd48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.834027 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.834226 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6ph\" (UniqueName: \"kubernetes.io/projected/c10be81f-4b62-414a-bfec-3851332ecd48-kube-api-access-pp6ph\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.834255 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936250 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6ph\" (UniqueName: \"kubernetes.io/projected/c10be81f-4b62-414a-bfec-3851332ecd48-kube-api-access-pp6ph\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936306 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") "
pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936371 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936402 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936460 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936516 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-config\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936589 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-secret-combined-ca-bundle\") pod 
\"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936612 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936646 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936685 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936727 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c10be81f-4b62-414a-bfec-3851332ecd48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936760 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/c10be81f-4b62-414a-bfec-3851332ecd48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.936797 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.937661 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.937959 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.938206 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c10be81f-4b62-414a-bfec-3851332ecd48-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc 
kubenswrapper[4799]: I0216 13:18:08.941968 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.942952 4799 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.942990 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d8cc6eee7369a0a6de9fc43cae4068e826e1253c0ec6fd8cae0c234b0f57b7e3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.943513 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-config\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.943815 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc 
kubenswrapper[4799]: I0216 13:18:08.944915 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.945527 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c10be81f-4b62-414a-bfec-3851332ecd48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.947868 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c10be81f-4b62-414a-bfec-3851332ecd48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.948231 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.948941 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10be81f-4b62-414a-bfec-3851332ecd48-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.961154 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6ph\" (UniqueName: \"kubernetes.io/projected/c10be81f-4b62-414a-bfec-3851332ecd48-kube-api-access-pp6ph\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:08 crc kubenswrapper[4799]: I0216 13:18:08.980489 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f871dfd0-8b6f-431c-913a-4a14a62dbebd\") pod \"prometheus-metric-storage-0\" (UID: \"c10be81f-4b62-414a-bfec-3851332ecd48\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:09 crc kubenswrapper[4799]: I0216 13:18:09.023175 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:09 crc kubenswrapper[4799]: I0216 13:18:09.164576 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dbdb842-28de-45d4-8706-54b8671c18b7" path="/var/lib/kubelet/pods/3dbdb842-28de-45d4-8706-54b8671c18b7/volumes" Feb 16 13:18:09 crc kubenswrapper[4799]: I0216 13:18:09.466034 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 13:18:09 crc kubenswrapper[4799]: I0216 13:18:09.544615 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c10be81f-4b62-414a-bfec-3851332ecd48","Type":"ContainerStarted","Data":"a286ce2f2df49d4f43a37877b51a83cbd2ae8bb7a3eebd3e83031345148ec7ce"} Feb 16 13:18:13 crc kubenswrapper[4799]: I0216 13:18:13.580896 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c10be81f-4b62-414a-bfec-3851332ecd48","Type":"ContainerStarted","Data":"6b150529499ba5be5bea234bf59bf57f21f82c01acf0d4cd1ad171911b26a32f"} Feb 16 13:18:18 crc kubenswrapper[4799]: I0216 13:18:18.963431 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w6hsl"] Feb 16 13:18:18 crc kubenswrapper[4799]: I0216 13:18:18.965874 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:18 crc kubenswrapper[4799]: I0216 13:18:18.979688 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6hsl"] Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.046820 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-catalog-content\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.047024 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45lh\" (UniqueName: \"kubernetes.io/projected/d3711e26-8f6d-4de3-9aad-5924f23370f5-kube-api-access-m45lh\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.047095 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-utilities\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.149166 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-catalog-content\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.149357 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m45lh\" (UniqueName: \"kubernetes.io/projected/d3711e26-8f6d-4de3-9aad-5924f23370f5-kube-api-access-m45lh\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.149435 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-utilities\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.149605 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-catalog-content\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.149732 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-utilities\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.171318 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-m45lh\" (UniqueName: \"kubernetes.io/projected/d3711e26-8f6d-4de3-9aad-5924f23370f5-kube-api-access-m45lh\") pod \"redhat-marketplace-w6hsl\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:19 crc kubenswrapper[4799]: I0216 13:18:19.288276 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:20 crc kubenswrapper[4799]: I0216 13:18:20.382572 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6hsl"] Feb 16 13:18:20 crc kubenswrapper[4799]: I0216 13:18:20.657649 4799 generic.go:334] "Generic (PLEG): container finished" podID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerID="a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a" exitCode=0 Feb 16 13:18:20 crc kubenswrapper[4799]: I0216 13:18:20.657778 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6hsl" event={"ID":"d3711e26-8f6d-4de3-9aad-5924f23370f5","Type":"ContainerDied","Data":"a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a"} Feb 16 13:18:20 crc kubenswrapper[4799]: I0216 13:18:20.657812 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6hsl" event={"ID":"d3711e26-8f6d-4de3-9aad-5924f23370f5","Type":"ContainerStarted","Data":"eb0dde8a62b17f8b8e4b4afd948a15d511a844f49686e3cb1bec8abd4ba88d16"} Feb 16 13:18:20 crc kubenswrapper[4799]: I0216 13:18:20.662515 4799 generic.go:334] "Generic (PLEG): container finished" podID="c10be81f-4b62-414a-bfec-3851332ecd48" containerID="6b150529499ba5be5bea234bf59bf57f21f82c01acf0d4cd1ad171911b26a32f" exitCode=0 Feb 16 13:18:20 crc kubenswrapper[4799]: I0216 13:18:20.662572 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c10be81f-4b62-414a-bfec-3851332ecd48","Type":"ContainerDied","Data":"6b150529499ba5be5bea234bf59bf57f21f82c01acf0d4cd1ad171911b26a32f"} Feb 16 13:18:21 crc kubenswrapper[4799]: I0216 13:18:21.682208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c10be81f-4b62-414a-bfec-3851332ecd48","Type":"ContainerStarted","Data":"f2c1418a36d1c345d65e03928ae1b8d62bd8b774ee5d98c170917310489b63bb"} Feb 16 13:18:22 crc kubenswrapper[4799]: I0216 13:18:22.693275 4799 generic.go:334] "Generic (PLEG): container finished" podID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerID="b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe" exitCode=0 Feb 16 13:18:22 crc kubenswrapper[4799]: I0216 13:18:22.693333 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6hsl" event={"ID":"d3711e26-8f6d-4de3-9aad-5924f23370f5","Type":"ContainerDied","Data":"b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe"} Feb 16 13:18:23 crc kubenswrapper[4799]: I0216 13:18:23.709614 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6hsl" event={"ID":"d3711e26-8f6d-4de3-9aad-5924f23370f5","Type":"ContainerStarted","Data":"6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2"} Feb 16 13:18:23 crc kubenswrapper[4799]: I0216 13:18:23.731894 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w6hsl" podStartSLOduration=3.222055632 podStartE2EDuration="5.731877485s" podCreationTimestamp="2026-02-16 13:18:18 +0000 UTC" firstStartedPulling="2026-02-16 13:18:20.660050668 +0000 UTC m=+2806.253066002" lastFinishedPulling="2026-02-16 13:18:23.169872521 +0000 UTC m=+2808.762887855" observedRunningTime="2026-02-16 13:18:23.729569469 +0000 UTC m=+2809.322584803" watchObservedRunningTime="2026-02-16 13:18:23.731877485 +0000 UTC m=+2809.324892819" Feb 
16 13:18:24 crc kubenswrapper[4799]: I0216 13:18:24.723007 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c10be81f-4b62-414a-bfec-3851332ecd48","Type":"ContainerStarted","Data":"7f026cb9e5fd682340bec4e226ecebf37785faf15bea69f4f59d842f87b3649f"} Feb 16 13:18:24 crc kubenswrapper[4799]: I0216 13:18:24.723364 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c10be81f-4b62-414a-bfec-3851332ecd48","Type":"ContainerStarted","Data":"c74b2bcab77f8d492219ac6e5d37e21047ac39e281b691edf04fd0c8a9e4ac78"} Feb 16 13:18:24 crc kubenswrapper[4799]: I0216 13:18:24.754311 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.754292288 podStartE2EDuration="16.754292288s" podCreationTimestamp="2026-02-16 13:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:18:24.750800318 +0000 UTC m=+2810.343815642" watchObservedRunningTime="2026-02-16 13:18:24.754292288 +0000 UTC m=+2810.347307612" Feb 16 13:18:29 crc kubenswrapper[4799]: I0216 13:18:29.024282 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:29 crc kubenswrapper[4799]: I0216 13:18:29.288346 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:29 crc kubenswrapper[4799]: I0216 13:18:29.288655 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:29 crc kubenswrapper[4799]: I0216 13:18:29.338001 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:29 crc kubenswrapper[4799]: I0216 
13:18:29.825380 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:29 crc kubenswrapper[4799]: I0216 13:18:29.892886 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6hsl"] Feb 16 13:18:31 crc kubenswrapper[4799]: I0216 13:18:31.788801 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w6hsl" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerName="registry-server" containerID="cri-o://6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2" gracePeriod=2 Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.376772 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.546776 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-catalog-content\") pod \"d3711e26-8f6d-4de3-9aad-5924f23370f5\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.546886 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m45lh\" (UniqueName: \"kubernetes.io/projected/d3711e26-8f6d-4de3-9aad-5924f23370f5-kube-api-access-m45lh\") pod \"d3711e26-8f6d-4de3-9aad-5924f23370f5\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.547102 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-utilities\") pod \"d3711e26-8f6d-4de3-9aad-5924f23370f5\" (UID: \"d3711e26-8f6d-4de3-9aad-5924f23370f5\") " Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 
13:18:32.548090 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-utilities" (OuterVolumeSpecName: "utilities") pod "d3711e26-8f6d-4de3-9aad-5924f23370f5" (UID: "d3711e26-8f6d-4de3-9aad-5924f23370f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.552848 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3711e26-8f6d-4de3-9aad-5924f23370f5-kube-api-access-m45lh" (OuterVolumeSpecName: "kube-api-access-m45lh") pod "d3711e26-8f6d-4de3-9aad-5924f23370f5" (UID: "d3711e26-8f6d-4de3-9aad-5924f23370f5"). InnerVolumeSpecName "kube-api-access-m45lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.597061 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3711e26-8f6d-4de3-9aad-5924f23370f5" (UID: "d3711e26-8f6d-4de3-9aad-5924f23370f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.649435 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m45lh\" (UniqueName: \"kubernetes.io/projected/d3711e26-8f6d-4de3-9aad-5924f23370f5-kube-api-access-m45lh\") on node \"crc\" DevicePath \"\"" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.649660 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.649747 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3711e26-8f6d-4de3-9aad-5924f23370f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.797940 4799 generic.go:334] "Generic (PLEG): container finished" podID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerID="6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2" exitCode=0 Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.797979 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6hsl" event={"ID":"d3711e26-8f6d-4de3-9aad-5924f23370f5","Type":"ContainerDied","Data":"6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2"} Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.798007 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6hsl" event={"ID":"d3711e26-8f6d-4de3-9aad-5924f23370f5","Type":"ContainerDied","Data":"eb0dde8a62b17f8b8e4b4afd948a15d511a844f49686e3cb1bec8abd4ba88d16"} Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.798027 4799 scope.go:117] "RemoveContainer" containerID="6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 
13:18:32.798027 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6hsl" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.819485 4799 scope.go:117] "RemoveContainer" containerID="b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.832514 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6hsl"] Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.841010 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6hsl"] Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.861861 4799 scope.go:117] "RemoveContainer" containerID="a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.892541 4799 scope.go:117] "RemoveContainer" containerID="6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2" Feb 16 13:18:32 crc kubenswrapper[4799]: E0216 13:18:32.892887 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2\": container with ID starting with 6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2 not found: ID does not exist" containerID="6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.892914 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2"} err="failed to get container status \"6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2\": rpc error: code = NotFound desc = could not find container \"6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2\": container with ID starting with 
6187f3bb250ee0d059d9ac979137a07a8a7a16f066141ea64ae2c1b626c751f2 not found: ID does not exist" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.892934 4799 scope.go:117] "RemoveContainer" containerID="b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe" Feb 16 13:18:32 crc kubenswrapper[4799]: E0216 13:18:32.893291 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe\": container with ID starting with b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe not found: ID does not exist" containerID="b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.893310 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe"} err="failed to get container status \"b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe\": rpc error: code = NotFound desc = could not find container \"b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe\": container with ID starting with b3a8bb4d2b58ab215bbe2e1b00b502bdbe37e8a0c7bea034d8bf817f928c33fe not found: ID does not exist" Feb 16 13:18:32 crc kubenswrapper[4799]: I0216 13:18:32.893323 4799 scope.go:117] "RemoveContainer" containerID="a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a" Feb 16 13:18:32 crc kubenswrapper[4799]: E0216 13:18:32.893697 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a\": container with ID starting with a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a not found: ID does not exist" containerID="a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a" Feb 16 13:18:32 crc 
kubenswrapper[4799]: I0216 13:18:32.893720 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a"} err="failed to get container status \"a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a\": rpc error: code = NotFound desc = could not find container \"a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a\": container with ID starting with a0584966e65eb52f741b3afa8be95f6c1a51ce56e94159fa00e7ba9c512eab8a not found: ID does not exist" Feb 16 13:18:33 crc kubenswrapper[4799]: I0216 13:18:33.165205 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" path="/var/lib/kubelet/pods/d3711e26-8f6d-4de3-9aad-5924f23370f5/volumes" Feb 16 13:18:39 crc kubenswrapper[4799]: I0216 13:18:39.023478 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:39 crc kubenswrapper[4799]: I0216 13:18:39.029659 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:39 crc kubenswrapper[4799]: I0216 13:18:39.883210 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.934629 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:18:58 crc kubenswrapper[4799]: E0216 13:18:58.935662 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerName="extract-utilities" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.935676 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerName="extract-utilities" Feb 16 13:18:58 crc kubenswrapper[4799]: E0216 13:18:58.935697 4799 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerName="registry-server" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.935703 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerName="registry-server" Feb 16 13:18:58 crc kubenswrapper[4799]: E0216 13:18:58.935717 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerName="extract-content" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.935724 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerName="extract-content" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.935930 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3711e26-8f6d-4de3-9aad-5924f23370f5" containerName="registry-server" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.936717 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.939664 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.939774 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.940147 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.940275 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zhw5r" Feb 16 13:18:58 crc kubenswrapper[4799]: I0216 13:18:58.947619 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.003934 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.004003 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-config-data\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.004061 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.004213 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.004246 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqtg7\" (UniqueName: \"kubernetes.io/projected/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-kube-api-access-pqtg7\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.004292 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.004365 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.004389 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.004419 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106033 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106093 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqtg7\" (UniqueName: \"kubernetes.io/projected/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-kube-api-access-pqtg7\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106116 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106165 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: 
\"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106188 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106209 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106316 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-config-data\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106476 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " 
pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106511 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106651 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.106738 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.107661 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.107693 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-config-data\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: 
I0216 13:18:59.113795 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.116065 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.122481 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.127797 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqtg7\" (UniqueName: \"kubernetes.io/projected/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-kube-api-access-pqtg7\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.160192 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.266106 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:18:59 crc kubenswrapper[4799]: I0216 13:18:59.713860 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:19:00 crc kubenswrapper[4799]: I0216 13:19:00.082358 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd","Type":"ContainerStarted","Data":"915b3a462290921fad9deba60ec8e0e496a05c41c528c4e1596be91697adb44d"} Feb 16 13:19:11 crc kubenswrapper[4799]: I0216 13:19:11.203049 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd","Type":"ContainerStarted","Data":"a9a6e13c0a18bdc351ebd3beaf74596ed2b51864d1f79982a126a39cbcca41bb"} Feb 16 13:19:21 crc kubenswrapper[4799]: I0216 13:19:21.793367 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:19:21 crc kubenswrapper[4799]: I0216 13:19:21.794026 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:19:51 crc kubenswrapper[4799]: I0216 13:19:51.792945 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:19:51 crc kubenswrapper[4799]: I0216 13:19:51.793600 
4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:20:21 crc kubenswrapper[4799]: I0216 13:20:21.792851 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:20:21 crc kubenswrapper[4799]: I0216 13:20:21.793382 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:20:21 crc kubenswrapper[4799]: I0216 13:20:21.793440 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:20:21 crc kubenswrapper[4799]: I0216 13:20:21.794249 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f455c79e14fa1b0be07e059ec5a15012005a44a11fd4803ca25a5d892387d70"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:20:21 crc kubenswrapper[4799]: I0216 13:20:21.794305 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" 
containerName="machine-config-daemon" containerID="cri-o://0f455c79e14fa1b0be07e059ec5a15012005a44a11fd4803ca25a5d892387d70" gracePeriod=600 Feb 16 13:20:22 crc kubenswrapper[4799]: I0216 13:20:22.196076 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="0f455c79e14fa1b0be07e059ec5a15012005a44a11fd4803ca25a5d892387d70" exitCode=0 Feb 16 13:20:22 crc kubenswrapper[4799]: I0216 13:20:22.196145 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"0f455c79e14fa1b0be07e059ec5a15012005a44a11fd4803ca25a5d892387d70"} Feb 16 13:20:22 crc kubenswrapper[4799]: I0216 13:20:22.196424 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25"} Feb 16 13:20:22 crc kubenswrapper[4799]: I0216 13:20:22.196445 4799 scope.go:117] "RemoveContainer" containerID="861ebe27892d3575a11057c04dc9e3457b247729e6c476340d79612f81eda542" Feb 16 13:20:22 crc kubenswrapper[4799]: I0216 13:20:22.222521 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=75.145020977 podStartE2EDuration="1m25.222503905s" podCreationTimestamp="2026-02-16 13:18:57 +0000 UTC" firstStartedPulling="2026-02-16 13:18:59.713961798 +0000 UTC m=+2845.306977132" lastFinishedPulling="2026-02-16 13:19:09.791444726 +0000 UTC m=+2855.384460060" observedRunningTime="2026-02-16 13:19:11.225826122 +0000 UTC m=+2856.818841456" watchObservedRunningTime="2026-02-16 13:20:22.222503905 +0000 UTC m=+2927.815519239" Feb 16 13:22:51 crc kubenswrapper[4799]: I0216 13:22:51.792743 4799 patch_prober.go:28] interesting 
pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:22:51 crc kubenswrapper[4799]: I0216 13:22:51.793367 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:23:21 crc kubenswrapper[4799]: I0216 13:23:21.793031 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:23:21 crc kubenswrapper[4799]: I0216 13:23:21.793616 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:23:51 crc kubenswrapper[4799]: I0216 13:23:51.792945 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:23:51 crc kubenswrapper[4799]: I0216 13:23:51.794347 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:23:51 crc kubenswrapper[4799]: I0216 13:23:51.794418 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:23:51 crc kubenswrapper[4799]: I0216 13:23:51.795254 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:23:51 crc kubenswrapper[4799]: I0216 13:23:51.795324 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" gracePeriod=600 Feb 16 13:23:51 crc kubenswrapper[4799]: E0216 13:23:51.917228 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:23:52 crc kubenswrapper[4799]: I0216 13:23:52.215576 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" exitCode=0 Feb 16 13:23:52 crc kubenswrapper[4799]: I0216 
13:23:52.215635 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25"} Feb 16 13:23:52 crc kubenswrapper[4799]: I0216 13:23:52.215677 4799 scope.go:117] "RemoveContainer" containerID="0f455c79e14fa1b0be07e059ec5a15012005a44a11fd4803ca25a5d892387d70" Feb 16 13:23:52 crc kubenswrapper[4799]: I0216 13:23:52.216623 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:23:52 crc kubenswrapper[4799]: E0216 13:23:52.217120 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:24:03 crc kubenswrapper[4799]: I0216 13:24:03.167306 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:24:03 crc kubenswrapper[4799]: E0216 13:24:03.168348 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:24:17 crc kubenswrapper[4799]: I0216 13:24:17.150232 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 
13:24:17 crc kubenswrapper[4799]: E0216 13:24:17.150960 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:24:29 crc kubenswrapper[4799]: I0216 13:24:29.149956 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:24:29 crc kubenswrapper[4799]: E0216 13:24:29.150736 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:24:44 crc kubenswrapper[4799]: I0216 13:24:44.150253 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:24:44 crc kubenswrapper[4799]: E0216 13:24:44.151225 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:24:59 crc kubenswrapper[4799]: I0216 13:24:59.150645 4799 scope.go:117] "RemoveContainer" 
containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:24:59 crc kubenswrapper[4799]: E0216 13:24:59.151556 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:25:11 crc kubenswrapper[4799]: I0216 13:25:11.149794 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:25:11 crc kubenswrapper[4799]: E0216 13:25:11.150662 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:25:25 crc kubenswrapper[4799]: I0216 13:25:25.151525 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:25:25 crc kubenswrapper[4799]: E0216 13:25:25.152413 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:25:38 crc kubenswrapper[4799]: I0216 13:25:38.149483 4799 scope.go:117] 
"RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:25:38 crc kubenswrapper[4799]: E0216 13:25:38.150394 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:25:51 crc kubenswrapper[4799]: I0216 13:25:51.150393 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:25:51 crc kubenswrapper[4799]: E0216 13:25:51.151346 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:26:06 crc kubenswrapper[4799]: I0216 13:26:06.149844 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:26:06 crc kubenswrapper[4799]: E0216 13:26:06.150719 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:26:19 crc kubenswrapper[4799]: I0216 13:26:19.149992 
4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:26:19 crc kubenswrapper[4799]: E0216 13:26:19.150848 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:26:34 crc kubenswrapper[4799]: I0216 13:26:34.150243 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:26:34 crc kubenswrapper[4799]: E0216 13:26:34.151004 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:26:47 crc kubenswrapper[4799]: I0216 13:26:47.149153 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:26:47 crc kubenswrapper[4799]: E0216 13:26:47.149979 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:26:59 crc kubenswrapper[4799]: I0216 
13:26:59.150552 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:26:59 crc kubenswrapper[4799]: E0216 13:26:59.151534 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.237970 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dfzxx"] Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.246082 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.253809 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfzxx"] Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.367022 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr92s\" (UniqueName: \"kubernetes.io/projected/7dfccea7-5784-4759-b1f3-439b73e7ba5a-kube-api-access-qr92s\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.367114 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-utilities\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 
13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.367220 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-catalog-content\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.470065 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-utilities\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.470181 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-catalog-content\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.470483 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr92s\" (UniqueName: \"kubernetes.io/projected/7dfccea7-5784-4759-b1f3-439b73e7ba5a-kube-api-access-qr92s\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.470749 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-catalog-content\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 
13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.470918 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-utilities\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.499316 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr92s\" (UniqueName: \"kubernetes.io/projected/7dfccea7-5784-4759-b1f3-439b73e7ba5a-kube-api-access-qr92s\") pod \"certified-operators-dfzxx\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:02 crc kubenswrapper[4799]: I0216 13:27:02.577412 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:03 crc kubenswrapper[4799]: I0216 13:27:03.102146 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfzxx"] Feb 16 13:27:04 crc kubenswrapper[4799]: I0216 13:27:04.093167 4799 generic.go:334] "Generic (PLEG): container finished" podID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerID="355be7772ea1885ca79f3537684611477b0f9e8509280f5b62821ac932e3ec2c" exitCode=0 Feb 16 13:27:04 crc kubenswrapper[4799]: I0216 13:27:04.093315 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfzxx" event={"ID":"7dfccea7-5784-4759-b1f3-439b73e7ba5a","Type":"ContainerDied","Data":"355be7772ea1885ca79f3537684611477b0f9e8509280f5b62821ac932e3ec2c"} Feb 16 13:27:04 crc kubenswrapper[4799]: I0216 13:27:04.093800 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfzxx" 
event={"ID":"7dfccea7-5784-4759-b1f3-439b73e7ba5a","Type":"ContainerStarted","Data":"38d570aeaf25f2d3c57a773c1adedd93efb076a687ae53ceae779a92e8de1e14"} Feb 16 13:27:04 crc kubenswrapper[4799]: I0216 13:27:04.096845 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:27:05 crc kubenswrapper[4799]: I0216 13:27:05.110197 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfzxx" event={"ID":"7dfccea7-5784-4759-b1f3-439b73e7ba5a","Type":"ContainerStarted","Data":"eaccf9da355506f1f687213d60333ee6904454f00a2029cad5c093f4be5bd680"} Feb 16 13:27:07 crc kubenswrapper[4799]: I0216 13:27:07.132148 4799 generic.go:334] "Generic (PLEG): container finished" podID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerID="eaccf9da355506f1f687213d60333ee6904454f00a2029cad5c093f4be5bd680" exitCode=0 Feb 16 13:27:07 crc kubenswrapper[4799]: I0216 13:27:07.132230 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfzxx" event={"ID":"7dfccea7-5784-4759-b1f3-439b73e7ba5a","Type":"ContainerDied","Data":"eaccf9da355506f1f687213d60333ee6904454f00a2029cad5c093f4be5bd680"} Feb 16 13:27:08 crc kubenswrapper[4799]: I0216 13:27:08.144549 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfzxx" event={"ID":"7dfccea7-5784-4759-b1f3-439b73e7ba5a","Type":"ContainerStarted","Data":"d83fcc2f11c33bf581a0bd54eb332fd1f80ad29e7417dfa50558d268620b3014"} Feb 16 13:27:08 crc kubenswrapper[4799]: I0216 13:27:08.165625 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dfzxx" podStartSLOduration=2.6459054379999998 podStartE2EDuration="6.165602121s" podCreationTimestamp="2026-02-16 13:27:02 +0000 UTC" firstStartedPulling="2026-02-16 13:27:04.096419581 +0000 UTC m=+3329.689434915" lastFinishedPulling="2026-02-16 13:27:07.616116254 +0000 UTC 
m=+3333.209131598" observedRunningTime="2026-02-16 13:27:08.162079611 +0000 UTC m=+3333.755094945" watchObservedRunningTime="2026-02-16 13:27:08.165602121 +0000 UTC m=+3333.758617465" Feb 16 13:27:12 crc kubenswrapper[4799]: I0216 13:27:12.577992 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:12 crc kubenswrapper[4799]: I0216 13:27:12.578517 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:12 crc kubenswrapper[4799]: I0216 13:27:12.672340 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:13 crc kubenswrapper[4799]: I0216 13:27:13.262967 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:13 crc kubenswrapper[4799]: I0216 13:27:13.325476 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfzxx"] Feb 16 13:27:14 crc kubenswrapper[4799]: I0216 13:27:14.149616 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:27:14 crc kubenswrapper[4799]: E0216 13:27:14.150378 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:27:15 crc kubenswrapper[4799]: I0216 13:27:15.214949 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dfzxx" 
podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerName="registry-server" containerID="cri-o://d83fcc2f11c33bf581a0bd54eb332fd1f80ad29e7417dfa50558d268620b3014" gracePeriod=2 Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.227673 4799 generic.go:334] "Generic (PLEG): container finished" podID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerID="d83fcc2f11c33bf581a0bd54eb332fd1f80ad29e7417dfa50558d268620b3014" exitCode=0 Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.227750 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfzxx" event={"ID":"7dfccea7-5784-4759-b1f3-439b73e7ba5a","Type":"ContainerDied","Data":"d83fcc2f11c33bf581a0bd54eb332fd1f80ad29e7417dfa50558d268620b3014"} Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.228073 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfzxx" event={"ID":"7dfccea7-5784-4759-b1f3-439b73e7ba5a","Type":"ContainerDied","Data":"38d570aeaf25f2d3c57a773c1adedd93efb076a687ae53ceae779a92e8de1e14"} Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.228093 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d570aeaf25f2d3c57a773c1adedd93efb076a687ae53ceae779a92e8de1e14" Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.297092 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.360285 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-utilities\") pod \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.360545 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-catalog-content\") pod \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.360604 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr92s\" (UniqueName: \"kubernetes.io/projected/7dfccea7-5784-4759-b1f3-439b73e7ba5a-kube-api-access-qr92s\") pod \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\" (UID: \"7dfccea7-5784-4759-b1f3-439b73e7ba5a\") " Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.361413 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-utilities" (OuterVolumeSpecName: "utilities") pod "7dfccea7-5784-4759-b1f3-439b73e7ba5a" (UID: "7dfccea7-5784-4759-b1f3-439b73e7ba5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.384446 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfccea7-5784-4759-b1f3-439b73e7ba5a-kube-api-access-qr92s" (OuterVolumeSpecName: "kube-api-access-qr92s") pod "7dfccea7-5784-4759-b1f3-439b73e7ba5a" (UID: "7dfccea7-5784-4759-b1f3-439b73e7ba5a"). InnerVolumeSpecName "kube-api-access-qr92s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.421679 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dfccea7-5784-4759-b1f3-439b73e7ba5a" (UID: "7dfccea7-5784-4759-b1f3-439b73e7ba5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.464286 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.464807 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr92s\" (UniqueName: \"kubernetes.io/projected/7dfccea7-5784-4759-b1f3-439b73e7ba5a-kube-api-access-qr92s\") on node \"crc\" DevicePath \"\"" Feb 16 13:27:16 crc kubenswrapper[4799]: I0216 13:27:16.464822 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfccea7-5784-4759-b1f3-439b73e7ba5a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:27:17 crc kubenswrapper[4799]: I0216 13:27:17.240543 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dfzxx" Feb 16 13:27:17 crc kubenswrapper[4799]: I0216 13:27:17.270286 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfzxx"] Feb 16 13:27:17 crc kubenswrapper[4799]: I0216 13:27:17.280709 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dfzxx"] Feb 16 13:27:19 crc kubenswrapper[4799]: I0216 13:27:19.164069 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" path="/var/lib/kubelet/pods/7dfccea7-5784-4759-b1f3-439b73e7ba5a/volumes" Feb 16 13:27:28 crc kubenswrapper[4799]: I0216 13:27:28.149616 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:27:28 crc kubenswrapper[4799]: E0216 13:27:28.150577 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.619410 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9gxh9"] Feb 16 13:27:38 crc kubenswrapper[4799]: E0216 13:27:38.620510 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerName="extract-utilities" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.620529 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerName="extract-utilities" Feb 16 13:27:38 crc kubenswrapper[4799]: E0216 13:27:38.620565 4799 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerName="registry-server" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.620572 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerName="registry-server" Feb 16 13:27:38 crc kubenswrapper[4799]: E0216 13:27:38.620592 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerName="extract-content" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.620598 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerName="extract-content" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.620804 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfccea7-5784-4759-b1f3-439b73e7ba5a" containerName="registry-server" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.622246 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.629068 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gxh9"] Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.708105 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-utilities\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.708267 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-catalog-content\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.708309 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2dv\" (UniqueName: \"kubernetes.io/projected/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-kube-api-access-cl2dv\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.811207 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-catalog-content\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.811289 4799 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cl2dv\" (UniqueName: \"kubernetes.io/projected/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-kube-api-access-cl2dv\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.811478 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-utilities\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.811757 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-catalog-content\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.811811 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-utilities\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.838472 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl2dv\" (UniqueName: \"kubernetes.io/projected/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-kube-api-access-cl2dv\") pod \"redhat-operators-9gxh9\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:38 crc kubenswrapper[4799]: I0216 13:27:38.941302 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:39 crc kubenswrapper[4799]: I0216 13:27:39.532418 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gxh9"] Feb 16 13:27:40 crc kubenswrapper[4799]: I0216 13:27:40.149143 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:27:40 crc kubenswrapper[4799]: E0216 13:27:40.149665 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:27:40 crc kubenswrapper[4799]: I0216 13:27:40.509039 4799 generic.go:334] "Generic (PLEG): container finished" podID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerID="8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d" exitCode=0 Feb 16 13:27:40 crc kubenswrapper[4799]: I0216 13:27:40.509265 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gxh9" event={"ID":"b8ed4e2d-2756-4016-9bdc-b56d2743bf66","Type":"ContainerDied","Data":"8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d"} Feb 16 13:27:40 crc kubenswrapper[4799]: I0216 13:27:40.509380 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gxh9" event={"ID":"b8ed4e2d-2756-4016-9bdc-b56d2743bf66","Type":"ContainerStarted","Data":"601b0bddf3e7490e114350e31c740cd7f269bacff61962774a1407c42d655067"} Feb 16 13:27:43 crc kubenswrapper[4799]: I0216 13:27:43.553495 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gxh9" 
event={"ID":"b8ed4e2d-2756-4016-9bdc-b56d2743bf66","Type":"ContainerStarted","Data":"c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c"} Feb 16 13:27:47 crc kubenswrapper[4799]: I0216 13:27:47.594603 4799 generic.go:334] "Generic (PLEG): container finished" podID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerID="c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c" exitCode=0 Feb 16 13:27:47 crc kubenswrapper[4799]: I0216 13:27:47.594739 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gxh9" event={"ID":"b8ed4e2d-2756-4016-9bdc-b56d2743bf66","Type":"ContainerDied","Data":"c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c"} Feb 16 13:27:48 crc kubenswrapper[4799]: I0216 13:27:48.607958 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gxh9" event={"ID":"b8ed4e2d-2756-4016-9bdc-b56d2743bf66","Type":"ContainerStarted","Data":"7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61"} Feb 16 13:27:48 crc kubenswrapper[4799]: I0216 13:27:48.626064 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9gxh9" podStartSLOduration=3.126212407 podStartE2EDuration="10.626045868s" podCreationTimestamp="2026-02-16 13:27:38 +0000 UTC" firstStartedPulling="2026-02-16 13:27:40.511039721 +0000 UTC m=+3366.104055055" lastFinishedPulling="2026-02-16 13:27:48.010873182 +0000 UTC m=+3373.603888516" observedRunningTime="2026-02-16 13:27:48.625649356 +0000 UTC m=+3374.218664690" watchObservedRunningTime="2026-02-16 13:27:48.626045868 +0000 UTC m=+3374.219061202" Feb 16 13:27:48 crc kubenswrapper[4799]: I0216 13:27:48.942386 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:48 crc kubenswrapper[4799]: I0216 13:27:48.942445 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:49 crc kubenswrapper[4799]: I0216 13:27:49.987085 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9gxh9" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="registry-server" probeResult="failure" output=< Feb 16 13:27:49 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:27:49 crc kubenswrapper[4799]: > Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.104842 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j4xn5"] Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.107761 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.140113 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j4xn5"] Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.183872 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-catalog-content\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.183984 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcz89\" (UniqueName: \"kubernetes.io/projected/d8b17ede-ddad-4472-8a39-7dfe72f01836-kube-api-access-bcz89\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.184006 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-utilities\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.287583 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-catalog-content\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.287702 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcz89\" (UniqueName: \"kubernetes.io/projected/d8b17ede-ddad-4472-8a39-7dfe72f01836-kube-api-access-bcz89\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.287733 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-utilities\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.288472 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-utilities\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.288757 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-catalog-content\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.335449 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcz89\" (UniqueName: \"kubernetes.io/projected/d8b17ede-ddad-4472-8a39-7dfe72f01836-kube-api-access-bcz89\") pod \"community-operators-j4xn5\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.436727 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:27:50 crc kubenswrapper[4799]: I0216 13:27:50.928319 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j4xn5"] Feb 16 13:27:51 crc kubenswrapper[4799]: I0216 13:27:51.149840 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:27:51 crc kubenswrapper[4799]: E0216 13:27:51.150501 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:27:51 crc kubenswrapper[4799]: I0216 13:27:51.643354 4799 generic.go:334] "Generic (PLEG): container finished" podID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerID="eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f" exitCode=0 Feb 16 13:27:51 crc 
kubenswrapper[4799]: I0216 13:27:51.643411 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4xn5" event={"ID":"d8b17ede-ddad-4472-8a39-7dfe72f01836","Type":"ContainerDied","Data":"eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f"} Feb 16 13:27:51 crc kubenswrapper[4799]: I0216 13:27:51.643620 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4xn5" event={"ID":"d8b17ede-ddad-4472-8a39-7dfe72f01836","Type":"ContainerStarted","Data":"0cfc4cbbbf7ebf24d68f404306996aa067ab803c3802ce3baaa8bea116cdd44d"} Feb 16 13:27:52 crc kubenswrapper[4799]: I0216 13:27:52.654456 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4xn5" event={"ID":"d8b17ede-ddad-4472-8a39-7dfe72f01836","Type":"ContainerStarted","Data":"05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc"} Feb 16 13:27:53 crc kubenswrapper[4799]: I0216 13:27:53.665219 4799 generic.go:334] "Generic (PLEG): container finished" podID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerID="05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc" exitCode=0 Feb 16 13:27:53 crc kubenswrapper[4799]: I0216 13:27:53.665316 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4xn5" event={"ID":"d8b17ede-ddad-4472-8a39-7dfe72f01836","Type":"ContainerDied","Data":"05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc"} Feb 16 13:27:54 crc kubenswrapper[4799]: I0216 13:27:54.679325 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4xn5" event={"ID":"d8b17ede-ddad-4472-8a39-7dfe72f01836","Type":"ContainerStarted","Data":"dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54"} Feb 16 13:27:54 crc kubenswrapper[4799]: I0216 13:27:54.710307 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-j4xn5" podStartSLOduration=2.3183427500000002 podStartE2EDuration="4.710284207s" podCreationTimestamp="2026-02-16 13:27:50 +0000 UTC" firstStartedPulling="2026-02-16 13:27:51.645963828 +0000 UTC m=+3377.238979152" lastFinishedPulling="2026-02-16 13:27:54.037905275 +0000 UTC m=+3379.630920609" observedRunningTime="2026-02-16 13:27:54.700012414 +0000 UTC m=+3380.293027758" watchObservedRunningTime="2026-02-16 13:27:54.710284207 +0000 UTC m=+3380.303299541" Feb 16 13:27:58 crc kubenswrapper[4799]: I0216 13:27:58.987594 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:59 crc kubenswrapper[4799]: I0216 13:27:59.036153 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:27:59 crc kubenswrapper[4799]: I0216 13:27:59.878326 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gxh9"] Feb 16 13:28:00 crc kubenswrapper[4799]: I0216 13:28:00.437345 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:28:00 crc kubenswrapper[4799]: I0216 13:28:00.437405 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:28:00 crc kubenswrapper[4799]: I0216 13:28:00.497737 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:28:00 crc kubenswrapper[4799]: I0216 13:28:00.732425 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9gxh9" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="registry-server" containerID="cri-o://7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61" gracePeriod=2 Feb 16 
13:28:00 crc kubenswrapper[4799]: I0216 13:28:00.781590 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.241851 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.323792 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl2dv\" (UniqueName: \"kubernetes.io/projected/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-kube-api-access-cl2dv\") pod \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.324395 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-utilities\") pod \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.325278 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-catalog-content\") pod \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\" (UID: \"b8ed4e2d-2756-4016-9bdc-b56d2743bf66\") " Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.325213 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-utilities" (OuterVolumeSpecName: "utilities") pod "b8ed4e2d-2756-4016-9bdc-b56d2743bf66" (UID: "b8ed4e2d-2756-4016-9bdc-b56d2743bf66"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.332377 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-kube-api-access-cl2dv" (OuterVolumeSpecName: "kube-api-access-cl2dv") pod "b8ed4e2d-2756-4016-9bdc-b56d2743bf66" (UID: "b8ed4e2d-2756-4016-9bdc-b56d2743bf66"). InnerVolumeSpecName "kube-api-access-cl2dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.435855 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl2dv\" (UniqueName: \"kubernetes.io/projected/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-kube-api-access-cl2dv\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.435903 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.465444 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8ed4e2d-2756-4016-9bdc-b56d2743bf66" (UID: "b8ed4e2d-2756-4016-9bdc-b56d2743bf66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.537734 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ed4e2d-2756-4016-9bdc-b56d2743bf66-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.743863 4799 generic.go:334] "Generic (PLEG): container finished" podID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerID="7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61" exitCode=0 Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.744062 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gxh9" event={"ID":"b8ed4e2d-2756-4016-9bdc-b56d2743bf66","Type":"ContainerDied","Data":"7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61"} Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.744201 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gxh9" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.744400 4799 scope.go:117] "RemoveContainer" containerID="7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.744351 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gxh9" event={"ID":"b8ed4e2d-2756-4016-9bdc-b56d2743bf66","Type":"ContainerDied","Data":"601b0bddf3e7490e114350e31c740cd7f269bacff61962774a1407c42d655067"} Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.842534 4799 scope.go:117] "RemoveContainer" containerID="c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.907470 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gxh9"] Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.926954 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9gxh9"] Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.938629 4799 scope.go:117] "RemoveContainer" containerID="8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.985795 4799 scope.go:117] "RemoveContainer" containerID="7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61" Feb 16 13:28:01 crc kubenswrapper[4799]: E0216 13:28:01.986832 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61\": container with ID starting with 7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61 not found: ID does not exist" containerID="7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.986882 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61"} err="failed to get container status \"7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61\": rpc error: code = NotFound desc = could not find container \"7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61\": container with ID starting with 7d6d1c487a5fb5677216fec29bced8b60b5deabccfde348dd092e7cf2c3abe61 not found: ID does not exist" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.986915 4799 scope.go:117] "RemoveContainer" containerID="c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c" Feb 16 13:28:01 crc kubenswrapper[4799]: E0216 13:28:01.987362 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c\": container with ID starting with c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c not found: ID does not exist" containerID="c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.987415 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c"} err="failed to get container status \"c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c\": rpc error: code = NotFound desc = could not find container \"c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c\": container with ID starting with c5a9a3bfe08973a0764f9233f316f304c772a73642a77b8733f68d520807600c not found: ID does not exist" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.987432 4799 scope.go:117] "RemoveContainer" containerID="8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d" Feb 16 13:28:01 crc kubenswrapper[4799]: E0216 
13:28:01.987780 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d\": container with ID starting with 8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d not found: ID does not exist" containerID="8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d" Feb 16 13:28:01 crc kubenswrapper[4799]: I0216 13:28:01.987806 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d"} err="failed to get container status \"8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d\": rpc error: code = NotFound desc = could not find container \"8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d\": container with ID starting with 8d71da6c9ac503162f98c95da5ff31a63e48bf9fbcdc5a3d412caf620433573d not found: ID does not exist" Feb 16 13:28:02 crc kubenswrapper[4799]: I0216 13:28:02.876709 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j4xn5"] Feb 16 13:28:02 crc kubenswrapper[4799]: I0216 13:28:02.877564 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j4xn5" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerName="registry-server" containerID="cri-o://dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54" gracePeriod=2 Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.163353 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" path="/var/lib/kubelet/pods/b8ed4e2d-2756-4016-9bdc-b56d2743bf66/volumes" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.395674 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.494004 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcz89\" (UniqueName: \"kubernetes.io/projected/d8b17ede-ddad-4472-8a39-7dfe72f01836-kube-api-access-bcz89\") pod \"d8b17ede-ddad-4472-8a39-7dfe72f01836\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.494188 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-utilities\") pod \"d8b17ede-ddad-4472-8a39-7dfe72f01836\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.494223 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-catalog-content\") pod \"d8b17ede-ddad-4472-8a39-7dfe72f01836\" (UID: \"d8b17ede-ddad-4472-8a39-7dfe72f01836\") " Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.495387 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-utilities" (OuterVolumeSpecName: "utilities") pod "d8b17ede-ddad-4472-8a39-7dfe72f01836" (UID: "d8b17ede-ddad-4472-8a39-7dfe72f01836"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.499554 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b17ede-ddad-4472-8a39-7dfe72f01836-kube-api-access-bcz89" (OuterVolumeSpecName: "kube-api-access-bcz89") pod "d8b17ede-ddad-4472-8a39-7dfe72f01836" (UID: "d8b17ede-ddad-4472-8a39-7dfe72f01836"). InnerVolumeSpecName "kube-api-access-bcz89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.543277 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8b17ede-ddad-4472-8a39-7dfe72f01836" (UID: "d8b17ede-ddad-4472-8a39-7dfe72f01836"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.596371 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.596402 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b17ede-ddad-4472-8a39-7dfe72f01836-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.596416 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcz89\" (UniqueName: \"kubernetes.io/projected/d8b17ede-ddad-4472-8a39-7dfe72f01836-kube-api-access-bcz89\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.764072 4799 generic.go:334] "Generic (PLEG): container finished" podID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerID="dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54" exitCode=0 Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.764409 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j4xn5" event={"ID":"d8b17ede-ddad-4472-8a39-7dfe72f01836","Type":"ContainerDied","Data":"dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54"} Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.764440 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-j4xn5" event={"ID":"d8b17ede-ddad-4472-8a39-7dfe72f01836","Type":"ContainerDied","Data":"0cfc4cbbbf7ebf24d68f404306996aa067ab803c3802ce3baaa8bea116cdd44d"} Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.764461 4799 scope.go:117] "RemoveContainer" containerID="dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.764582 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j4xn5" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.784384 4799 scope.go:117] "RemoveContainer" containerID="05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.803953 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j4xn5"] Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.813316 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j4xn5"] Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.820849 4799 scope.go:117] "RemoveContainer" containerID="eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.849222 4799 scope.go:117] "RemoveContainer" containerID="dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54" Feb 16 13:28:03 crc kubenswrapper[4799]: E0216 13:28:03.849558 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54\": container with ID starting with dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54 not found: ID does not exist" containerID="dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 
13:28:03.849584 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54"} err="failed to get container status \"dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54\": rpc error: code = NotFound desc = could not find container \"dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54\": container with ID starting with dcdbfc625ef17791592846e324ddee286d50f8d2ffa70c5297ed62a08cf8fa54 not found: ID does not exist" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.849610 4799 scope.go:117] "RemoveContainer" containerID="05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc" Feb 16 13:28:03 crc kubenswrapper[4799]: E0216 13:28:03.849828 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc\": container with ID starting with 05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc not found: ID does not exist" containerID="05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.849854 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc"} err="failed to get container status \"05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc\": rpc error: code = NotFound desc = could not find container \"05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc\": container with ID starting with 05ed113ec8eada6cce436e47be9752fffca173e1556a4dfcb371db7267ce8adc not found: ID does not exist" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.849872 4799 scope.go:117] "RemoveContainer" containerID="eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f" Feb 16 13:28:03 crc 
kubenswrapper[4799]: E0216 13:28:03.850098 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f\": container with ID starting with eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f not found: ID does not exist" containerID="eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f" Feb 16 13:28:03 crc kubenswrapper[4799]: I0216 13:28:03.850117 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f"} err="failed to get container status \"eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f\": rpc error: code = NotFound desc = could not find container \"eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f\": container with ID starting with eeaba7172538426cb82eabc0c301388ad3a9cdc236862b260c58611fd33c935f not found: ID does not exist" Feb 16 13:28:05 crc kubenswrapper[4799]: I0216 13:28:05.155787 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:28:05 crc kubenswrapper[4799]: E0216 13:28:05.156461 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:28:05 crc kubenswrapper[4799]: I0216 13:28:05.160184 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" path="/var/lib/kubelet/pods/d8b17ede-ddad-4472-8a39-7dfe72f01836/volumes" Feb 16 13:28:16 crc 
kubenswrapper[4799]: I0216 13:28:16.150832 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:28:16 crc kubenswrapper[4799]: E0216 13:28:16.151675 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:28:30 crc kubenswrapper[4799]: I0216 13:28:30.149514 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:28:30 crc kubenswrapper[4799]: E0216 13:28:30.150362 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.074035 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cqgt2"] Feb 16 13:28:41 crc kubenswrapper[4799]: E0216 13:28:41.075554 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="extract-utilities" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.075577 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="extract-utilities" Feb 16 13:28:41 crc kubenswrapper[4799]: E0216 13:28:41.075599 4799 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerName="extract-utilities" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.075611 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerName="extract-utilities" Feb 16 13:28:41 crc kubenswrapper[4799]: E0216 13:28:41.075637 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="registry-server" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.075650 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="registry-server" Feb 16 13:28:41 crc kubenswrapper[4799]: E0216 13:28:41.075688 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="extract-content" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.075700 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="extract-content" Feb 16 13:28:41 crc kubenswrapper[4799]: E0216 13:28:41.075735 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerName="registry-server" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.075746 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerName="registry-server" Feb 16 13:28:41 crc kubenswrapper[4799]: E0216 13:28:41.075771 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerName="extract-content" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.075783 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerName="extract-content" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.076117 4799 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b8ed4e2d-2756-4016-9bdc-b56d2743bf66" containerName="registry-server" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.076174 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b17ede-ddad-4472-8a39-7dfe72f01836" containerName="registry-server" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.078601 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.089492 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqgt2"] Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.147971 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-catalog-content\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.148097 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln5cz\" (UniqueName: \"kubernetes.io/projected/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-kube-api-access-ln5cz\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.156165 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-utilities\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.258341 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-utilities\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.258444 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-catalog-content\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.258477 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln5cz\" (UniqueName: \"kubernetes.io/projected/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-kube-api-access-ln5cz\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.259303 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-utilities\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.259546 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-catalog-content\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.287447 4799 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ln5cz\" (UniqueName: \"kubernetes.io/projected/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-kube-api-access-ln5cz\") pod \"redhat-marketplace-cqgt2\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:41 crc kubenswrapper[4799]: I0216 13:28:41.449363 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:42 crc kubenswrapper[4799]: I0216 13:28:42.023184 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqgt2"] Feb 16 13:28:42 crc kubenswrapper[4799]: I0216 13:28:42.145773 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqgt2" event={"ID":"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c","Type":"ContainerStarted","Data":"091387b22147efa44c9c48207a1eeb3d4510b0fbc31569c68031f65cb1706590"} Feb 16 13:28:42 crc kubenswrapper[4799]: I0216 13:28:42.149772 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:28:42 crc kubenswrapper[4799]: E0216 13:28:42.150029 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:28:43 crc kubenswrapper[4799]: I0216 13:28:43.157141 4799 generic.go:334] "Generic (PLEG): container finished" podID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerID="51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335" exitCode=0 Feb 16 13:28:43 crc kubenswrapper[4799]: I0216 13:28:43.161347 4799 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqgt2" event={"ID":"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c","Type":"ContainerDied","Data":"51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335"} Feb 16 13:28:44 crc kubenswrapper[4799]: I0216 13:28:44.172312 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqgt2" event={"ID":"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c","Type":"ContainerStarted","Data":"63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7"} Feb 16 13:28:45 crc kubenswrapper[4799]: I0216 13:28:45.181645 4799 generic.go:334] "Generic (PLEG): container finished" podID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerID="63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7" exitCode=0 Feb 16 13:28:45 crc kubenswrapper[4799]: I0216 13:28:45.181763 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqgt2" event={"ID":"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c","Type":"ContainerDied","Data":"63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7"} Feb 16 13:28:46 crc kubenswrapper[4799]: I0216 13:28:46.192708 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqgt2" event={"ID":"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c","Type":"ContainerStarted","Data":"68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560"} Feb 16 13:28:46 crc kubenswrapper[4799]: I0216 13:28:46.213300 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cqgt2" podStartSLOduration=2.709019929 podStartE2EDuration="5.213270869s" podCreationTimestamp="2026-02-16 13:28:41 +0000 UTC" firstStartedPulling="2026-02-16 13:28:43.159393467 +0000 UTC m=+3428.752408801" lastFinishedPulling="2026-02-16 13:28:45.663644387 +0000 UTC m=+3431.256659741" observedRunningTime="2026-02-16 13:28:46.208746489 +0000 UTC m=+3431.801761823" 
watchObservedRunningTime="2026-02-16 13:28:46.213270869 +0000 UTC m=+3431.806286203" Feb 16 13:28:51 crc kubenswrapper[4799]: I0216 13:28:51.449621 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:51 crc kubenswrapper[4799]: I0216 13:28:51.450469 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:51 crc kubenswrapper[4799]: I0216 13:28:51.500086 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:52 crc kubenswrapper[4799]: I0216 13:28:52.334589 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:52 crc kubenswrapper[4799]: I0216 13:28:52.395404 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqgt2"] Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.287028 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cqgt2" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerName="registry-server" containerID="cri-o://68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560" gracePeriod=2 Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.776338 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.871384 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-catalog-content\") pod \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.871681 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln5cz\" (UniqueName: \"kubernetes.io/projected/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-kube-api-access-ln5cz\") pod \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.871713 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-utilities\") pod \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\" (UID: \"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c\") " Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.872629 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-utilities" (OuterVolumeSpecName: "utilities") pod "df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" (UID: "df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.884709 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-kube-api-access-ln5cz" (OuterVolumeSpecName: "kube-api-access-ln5cz") pod "df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" (UID: "df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c"). InnerVolumeSpecName "kube-api-access-ln5cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.916646 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" (UID: "df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.973965 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln5cz\" (UniqueName: \"kubernetes.io/projected/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-kube-api-access-ln5cz\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.974512 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:54 crc kubenswrapper[4799]: I0216 13:28:54.974592 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.301110 4799 generic.go:334] "Generic (PLEG): container finished" podID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerID="68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560" exitCode=0 Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.301205 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqgt2" event={"ID":"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c","Type":"ContainerDied","Data":"68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560"} Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.301262 4799 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-cqgt2" event={"ID":"df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c","Type":"ContainerDied","Data":"091387b22147efa44c9c48207a1eeb3d4510b0fbc31569c68031f65cb1706590"} Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.301288 4799 scope.go:117] "RemoveContainer" containerID="68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.302797 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqgt2" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.342696 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqgt2"] Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.348047 4799 scope.go:117] "RemoveContainer" containerID="63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.357002 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqgt2"] Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.383043 4799 scope.go:117] "RemoveContainer" containerID="51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.469048 4799 scope.go:117] "RemoveContainer" containerID="68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560" Feb 16 13:28:55 crc kubenswrapper[4799]: E0216 13:28:55.469477 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560\": container with ID starting with 68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560 not found: ID does not exist" containerID="68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.469524 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560"} err="failed to get container status \"68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560\": rpc error: code = NotFound desc = could not find container \"68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560\": container with ID starting with 68009154a7379065654a1d0dad75e839a4f8788f0b76798d37f3ba55d4064560 not found: ID does not exist" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.469557 4799 scope.go:117] "RemoveContainer" containerID="63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7" Feb 16 13:28:55 crc kubenswrapper[4799]: E0216 13:28:55.469901 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7\": container with ID starting with 63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7 not found: ID does not exist" containerID="63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.469953 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7"} err="failed to get container status \"63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7\": rpc error: code = NotFound desc = could not find container \"63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7\": container with ID starting with 63e698454005a7b7a0d3e8c8870ac0ccc48f4a3ba132f0602156123257c4a6b7 not found: ID does not exist" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.469986 4799 scope.go:117] "RemoveContainer" containerID="51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335" Feb 16 13:28:55 crc kubenswrapper[4799]: E0216 
13:28:55.470305 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335\": container with ID starting with 51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335 not found: ID does not exist" containerID="51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335" Feb 16 13:28:55 crc kubenswrapper[4799]: I0216 13:28:55.470358 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335"} err="failed to get container status \"51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335\": rpc error: code = NotFound desc = could not find container \"51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335\": container with ID starting with 51e66a5106256c187cb32d90f3e198da96eb9852ca1d700947aa65258c958335 not found: ID does not exist" Feb 16 13:28:56 crc kubenswrapper[4799]: I0216 13:28:56.150340 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:28:57 crc kubenswrapper[4799]: I0216 13:28:57.161029 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" path="/var/lib/kubelet/pods/df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c/volumes" Feb 16 13:28:57 crc kubenswrapper[4799]: I0216 13:28:57.325758 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"972f30510fd5af0e41be7fa0e944dbd5fbcd90fd5289604ecd0efdb4cacf7ee2"} Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.149326 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l"] Feb 16 13:30:00 crc 
kubenswrapper[4799]: E0216 13:30:00.150366 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerName="extract-content" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.150382 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerName="extract-content" Feb 16 13:30:00 crc kubenswrapper[4799]: E0216 13:30:00.150407 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerName="registry-server" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.150412 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerName="registry-server" Feb 16 13:30:00 crc kubenswrapper[4799]: E0216 13:30:00.150440 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerName="extract-utilities" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.150446 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerName="extract-utilities" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.150660 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="df19ffe9-ff7f-4e57-8b86-c691e5fb3c8c" containerName="registry-server" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.151415 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.153515 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.153951 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.168456 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l"] Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.319286 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-secret-volume\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.319706 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9m4k\" (UniqueName: \"kubernetes.io/projected/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-kube-api-access-c9m4k\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.319772 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-config-volume\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.421458 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9m4k\" (UniqueName: \"kubernetes.io/projected/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-kube-api-access-c9m4k\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.421573 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-config-volume\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.421683 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-secret-volume\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.422987 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-config-volume\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.428092 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-secret-volume\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.440495 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9m4k\" (UniqueName: \"kubernetes.io/projected/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-kube-api-access-c9m4k\") pod \"collect-profiles-29520810-fwp8l\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.484614 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:00 crc kubenswrapper[4799]: I0216 13:30:00.974172 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l"] Feb 16 13:30:01 crc kubenswrapper[4799]: I0216 13:30:01.973264 4799 generic.go:334] "Generic (PLEG): container finished" podID="3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36" containerID="b6518a84bb93e0ed7dfb89d21a1fb86ee9fdea536a38d31e703faf8b32fe8186" exitCode=0 Feb 16 13:30:01 crc kubenswrapper[4799]: I0216 13:30:01.973360 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" event={"ID":"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36","Type":"ContainerDied","Data":"b6518a84bb93e0ed7dfb89d21a1fb86ee9fdea536a38d31e703faf8b32fe8186"} Feb 16 13:30:01 crc kubenswrapper[4799]: I0216 13:30:01.973607 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" 
event={"ID":"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36","Type":"ContainerStarted","Data":"44a0c65fefa38b88c09512a5def6b7dc5290f623c35e120193eb65856d294183"} Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.394901 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.596029 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9m4k\" (UniqueName: \"kubernetes.io/projected/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-kube-api-access-c9m4k\") pod \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.596371 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-secret-volume\") pod \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.596539 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-config-volume\") pod \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\" (UID: \"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36\") " Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.597403 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-config-volume" (OuterVolumeSpecName: "config-volume") pod "3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36" (UID: "3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.608487 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36" (UID: "3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.610455 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-kube-api-access-c9m4k" (OuterVolumeSpecName: "kube-api-access-c9m4k") pod "3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36" (UID: "3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36"). InnerVolumeSpecName "kube-api-access-c9m4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.699031 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.699080 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9m4k\" (UniqueName: \"kubernetes.io/projected/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-kube-api-access-c9m4k\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.699096 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.991470 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" 
event={"ID":"3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36","Type":"ContainerDied","Data":"44a0c65fefa38b88c09512a5def6b7dc5290f623c35e120193eb65856d294183"} Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.991775 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44a0c65fefa38b88c09512a5def6b7dc5290f623c35e120193eb65856d294183" Feb 16 13:30:03 crc kubenswrapper[4799]: I0216 13:30:03.991834 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l" Feb 16 13:30:04 crc kubenswrapper[4799]: I0216 13:30:04.493728 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5"] Feb 16 13:30:04 crc kubenswrapper[4799]: I0216 13:30:04.502131 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-m2hr5"] Feb 16 13:30:05 crc kubenswrapper[4799]: I0216 13:30:05.183406 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28edd9b2-9413-409a-a64d-95677b269d33" path="/var/lib/kubelet/pods/28edd9b2-9413-409a-a64d-95677b269d33/volumes" Feb 16 13:30:44 crc kubenswrapper[4799]: I0216 13:30:44.238405 4799 scope.go:117] "RemoveContainer" containerID="76f7ea01883bb2cddc8876b812aaa31776902c9c1deb631e9f84351948bd1159" Feb 16 13:31:21 crc kubenswrapper[4799]: I0216 13:31:21.793489 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:31:21 crc kubenswrapper[4799]: I0216 13:31:21.794099 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:31:51 crc kubenswrapper[4799]: I0216 13:31:51.792878 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:31:51 crc kubenswrapper[4799]: I0216 13:31:51.793556 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:32:21 crc kubenswrapper[4799]: I0216 13:32:21.793012 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:32:21 crc kubenswrapper[4799]: I0216 13:32:21.793925 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:32:21 crc kubenswrapper[4799]: I0216 13:32:21.793991 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:32:21 crc kubenswrapper[4799]: I0216 13:32:21.795225 4799 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"972f30510fd5af0e41be7fa0e944dbd5fbcd90fd5289604ecd0efdb4cacf7ee2"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:32:21 crc kubenswrapper[4799]: I0216 13:32:21.795302 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://972f30510fd5af0e41be7fa0e944dbd5fbcd90fd5289604ecd0efdb4cacf7ee2" gracePeriod=600 Feb 16 13:32:22 crc kubenswrapper[4799]: I0216 13:32:22.421758 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="972f30510fd5af0e41be7fa0e944dbd5fbcd90fd5289604ecd0efdb4cacf7ee2" exitCode=0 Feb 16 13:32:22 crc kubenswrapper[4799]: I0216 13:32:22.422086 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"972f30510fd5af0e41be7fa0e944dbd5fbcd90fd5289604ecd0efdb4cacf7ee2"} Feb 16 13:32:22 crc kubenswrapper[4799]: I0216 13:32:22.422112 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e"} Feb 16 13:32:22 crc kubenswrapper[4799]: I0216 13:32:22.422144 4799 scope.go:117] "RemoveContainer" containerID="cac922ca6885561f113aea2a41a46995771221907336f07eaa1c6999c5d3ab25" Feb 16 13:33:44 crc kubenswrapper[4799]: I0216 13:33:44.359604 4799 scope.go:117] "RemoveContainer" containerID="355be7772ea1885ca79f3537684611477b0f9e8509280f5b62821ac932e3ec2c" Feb 16 
13:33:44 crc kubenswrapper[4799]: I0216 13:33:44.393758 4799 scope.go:117] "RemoveContainer" containerID="d83fcc2f11c33bf581a0bd54eb332fd1f80ad29e7417dfa50558d268620b3014" Feb 16 13:33:44 crc kubenswrapper[4799]: I0216 13:33:44.447598 4799 scope.go:117] "RemoveContainer" containerID="eaccf9da355506f1f687213d60333ee6904454f00a2029cad5c093f4be5bd680" Feb 16 13:34:26 crc kubenswrapper[4799]: I0216 13:34:26.803958 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="06ddc5ff-d6d1-4997-8763-e97603e7df10" containerName="galera" probeResult="failure" output="command timed out" Feb 16 13:34:51 crc kubenswrapper[4799]: I0216 13:34:51.884938 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:34:51 crc kubenswrapper[4799]: I0216 13:34:51.885518 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:35:21 crc kubenswrapper[4799]: I0216 13:35:21.793297 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:35:21 crc kubenswrapper[4799]: I0216 13:35:21.793774 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:35:51 crc kubenswrapper[4799]: I0216 13:35:51.792813 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:35:51 crc kubenswrapper[4799]: I0216 13:35:51.793455 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:35:51 crc kubenswrapper[4799]: I0216 13:35:51.793503 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:35:51 crc kubenswrapper[4799]: I0216 13:35:51.794311 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:35:51 crc kubenswrapper[4799]: I0216 13:35:51.794364 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" gracePeriod=600 Feb 16 13:35:51 crc kubenswrapper[4799]: E0216 
13:35:51.916925 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:35:52 crc kubenswrapper[4799]: I0216 13:35:52.455335 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" exitCode=0 Feb 16 13:35:52 crc kubenswrapper[4799]: I0216 13:35:52.455403 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e"} Feb 16 13:35:52 crc kubenswrapper[4799]: I0216 13:35:52.455470 4799 scope.go:117] "RemoveContainer" containerID="972f30510fd5af0e41be7fa0e944dbd5fbcd90fd5289604ecd0efdb4cacf7ee2" Feb 16 13:35:52 crc kubenswrapper[4799]: I0216 13:35:52.456292 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:35:52 crc kubenswrapper[4799]: E0216 13:35:52.456679 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:36:07 crc kubenswrapper[4799]: I0216 13:36:07.149862 4799 scope.go:117] "RemoveContainer" 
containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:36:07 crc kubenswrapper[4799]: E0216 13:36:07.150947 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:36:19 crc kubenswrapper[4799]: I0216 13:36:19.149527 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:36:19 crc kubenswrapper[4799]: E0216 13:36:19.150415 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:36:32 crc kubenswrapper[4799]: I0216 13:36:32.150406 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:36:32 crc kubenswrapper[4799]: E0216 13:36:32.151243 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:36:44 crc kubenswrapper[4799]: I0216 13:36:44.150230 4799 scope.go:117] 
"RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:36:44 crc kubenswrapper[4799]: E0216 13:36:44.151289 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:36:56 crc kubenswrapper[4799]: I0216 13:36:56.149375 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:36:56 crc kubenswrapper[4799]: E0216 13:36:56.150272 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:37:08 crc kubenswrapper[4799]: I0216 13:37:08.149367 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:37:08 crc kubenswrapper[4799]: E0216 13:37:08.150322 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:37:22 crc kubenswrapper[4799]: I0216 13:37:22.149302 
4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:37:22 crc kubenswrapper[4799]: E0216 13:37:22.150200 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:37:37 crc kubenswrapper[4799]: I0216 13:37:37.150789 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:37:37 crc kubenswrapper[4799]: E0216 13:37:37.151588 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:37:49 crc kubenswrapper[4799]: I0216 13:37:49.149048 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:37:49 crc kubenswrapper[4799]: E0216 13:37:49.149834 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:37:55 crc kubenswrapper[4799]: I0216 
13:37:55.779392 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-64z5t"] Feb 16 13:37:55 crc kubenswrapper[4799]: E0216 13:37:55.780650 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36" containerName="collect-profiles" Feb 16 13:37:55 crc kubenswrapper[4799]: I0216 13:37:55.780671 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36" containerName="collect-profiles" Feb 16 13:37:55 crc kubenswrapper[4799]: I0216 13:37:55.780950 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36" containerName="collect-profiles" Feb 16 13:37:55 crc kubenswrapper[4799]: I0216 13:37:55.783187 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:55 crc kubenswrapper[4799]: I0216 13:37:55.790697 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-64z5t"] Feb 16 13:37:55 crc kubenswrapper[4799]: I0216 13:37:55.949915 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd9gs\" (UniqueName: \"kubernetes.io/projected/5a559e23-e8c1-4800-8c59-726d02f2c716-kube-api-access-jd9gs\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:55 crc kubenswrapper[4799]: I0216 13:37:55.950292 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-catalog-content\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:55 crc kubenswrapper[4799]: I0216 13:37:55.950353 
4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-utilities\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.052354 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd9gs\" (UniqueName: \"kubernetes.io/projected/5a559e23-e8c1-4800-8c59-726d02f2c716-kube-api-access-jd9gs\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.052423 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-catalog-content\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.052491 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-utilities\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.052933 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-catalog-content\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.052983 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-utilities\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.082369 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd9gs\" (UniqueName: \"kubernetes.io/projected/5a559e23-e8c1-4800-8c59-726d02f2c716-kube-api-access-jd9gs\") pod \"certified-operators-64z5t\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.113916 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.713200 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-64z5t"] Feb 16 13:37:56 crc kubenswrapper[4799]: W0216 13:37:56.716763 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a559e23_e8c1_4800_8c59_726d02f2c716.slice/crio-12735ad3b52ada4d149300e3abdcddfd62b0db2b2b45751155701e10a098f698 WatchSource:0}: Error finding container 12735ad3b52ada4d149300e3abdcddfd62b0db2b2b45751155701e10a098f698: Status 404 returned error can't find the container with id 12735ad3b52ada4d149300e3abdcddfd62b0db2b2b45751155701e10a098f698 Feb 16 13:37:56 crc kubenswrapper[4799]: I0216 13:37:56.783821 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64z5t" event={"ID":"5a559e23-e8c1-4800-8c59-726d02f2c716","Type":"ContainerStarted","Data":"12735ad3b52ada4d149300e3abdcddfd62b0db2b2b45751155701e10a098f698"} Feb 16 13:37:57 crc kubenswrapper[4799]: I0216 
13:37:57.794190 4799 generic.go:334] "Generic (PLEG): container finished" podID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerID="e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0" exitCode=0 Feb 16 13:37:57 crc kubenswrapper[4799]: I0216 13:37:57.794283 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64z5t" event={"ID":"5a559e23-e8c1-4800-8c59-726d02f2c716","Type":"ContainerDied","Data":"e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0"} Feb 16 13:37:57 crc kubenswrapper[4799]: I0216 13:37:57.796151 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:37:58 crc kubenswrapper[4799]: I0216 13:37:58.807853 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64z5t" event={"ID":"5a559e23-e8c1-4800-8c59-726d02f2c716","Type":"ContainerStarted","Data":"8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4"} Feb 16 13:38:00 crc kubenswrapper[4799]: I0216 13:38:00.825472 4799 generic.go:334] "Generic (PLEG): container finished" podID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerID="8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4" exitCode=0 Feb 16 13:38:00 crc kubenswrapper[4799]: I0216 13:38:00.825557 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64z5t" event={"ID":"5a559e23-e8c1-4800-8c59-726d02f2c716","Type":"ContainerDied","Data":"8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4"} Feb 16 13:38:01 crc kubenswrapper[4799]: I0216 13:38:01.837566 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64z5t" event={"ID":"5a559e23-e8c1-4800-8c59-726d02f2c716","Type":"ContainerStarted","Data":"8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9"} Feb 16 13:38:01 crc kubenswrapper[4799]: I0216 13:38:01.856830 4799 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-64z5t" podStartSLOduration=3.4355331639999998 podStartE2EDuration="6.856812856s" podCreationTimestamp="2026-02-16 13:37:55 +0000 UTC" firstStartedPulling="2026-02-16 13:37:57.79587233 +0000 UTC m=+3983.388887674" lastFinishedPulling="2026-02-16 13:38:01.217152022 +0000 UTC m=+3986.810167366" observedRunningTime="2026-02-16 13:38:01.85348717 +0000 UTC m=+3987.446502524" watchObservedRunningTime="2026-02-16 13:38:01.856812856 +0000 UTC m=+3987.449828190" Feb 16 13:38:03 crc kubenswrapper[4799]: I0216 13:38:03.149656 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:38:03 crc kubenswrapper[4799]: E0216 13:38:03.150250 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:38:06 crc kubenswrapper[4799]: I0216 13:38:06.114479 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:38:06 crc kubenswrapper[4799]: I0216 13:38:06.114875 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:38:06 crc kubenswrapper[4799]: I0216 13:38:06.161065 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:38:06 crc kubenswrapper[4799]: I0216 13:38:06.935781 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:38:06 crc kubenswrapper[4799]: I0216 13:38:06.979215 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-64z5t"] Feb 16 13:38:08 crc kubenswrapper[4799]: I0216 13:38:08.900429 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-64z5t" podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerName="registry-server" containerID="cri-o://8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9" gracePeriod=2 Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.455213 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.647699 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-utilities" (OuterVolumeSpecName: "utilities") pod "5a559e23-e8c1-4800-8c59-726d02f2c716" (UID: "5a559e23-e8c1-4800-8c59-726d02f2c716"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.645258 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-utilities\") pod \"5a559e23-e8c1-4800-8c59-726d02f2c716\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.648026 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-catalog-content\") pod \"5a559e23-e8c1-4800-8c59-726d02f2c716\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.649486 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd9gs\" (UniqueName: \"kubernetes.io/projected/5a559e23-e8c1-4800-8c59-726d02f2c716-kube-api-access-jd9gs\") pod \"5a559e23-e8c1-4800-8c59-726d02f2c716\" (UID: \"5a559e23-e8c1-4800-8c59-726d02f2c716\") " Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.650955 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.672404 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a559e23-e8c1-4800-8c59-726d02f2c716-kube-api-access-jd9gs" (OuterVolumeSpecName: "kube-api-access-jd9gs") pod "5a559e23-e8c1-4800-8c59-726d02f2c716" (UID: "5a559e23-e8c1-4800-8c59-726d02f2c716"). InnerVolumeSpecName "kube-api-access-jd9gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.708950 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a559e23-e8c1-4800-8c59-726d02f2c716" (UID: "5a559e23-e8c1-4800-8c59-726d02f2c716"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.752970 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd9gs\" (UniqueName: \"kubernetes.io/projected/5a559e23-e8c1-4800-8c59-726d02f2c716-kube-api-access-jd9gs\") on node \"crc\" DevicePath \"\"" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.753017 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a559e23-e8c1-4800-8c59-726d02f2c716-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.912986 4799 generic.go:334] "Generic (PLEG): container finished" podID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerID="8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9" exitCode=0 Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.913035 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64z5t" event={"ID":"5a559e23-e8c1-4800-8c59-726d02f2c716","Type":"ContainerDied","Data":"8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9"} Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.913075 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-64z5t" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.913098 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64z5t" event={"ID":"5a559e23-e8c1-4800-8c59-726d02f2c716","Type":"ContainerDied","Data":"12735ad3b52ada4d149300e3abdcddfd62b0db2b2b45751155701e10a098f698"} Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.913140 4799 scope.go:117] "RemoveContainer" containerID="8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.937602 4799 scope.go:117] "RemoveContainer" containerID="8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4" Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.963484 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-64z5t"] Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.974727 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-64z5t"] Feb 16 13:38:09 crc kubenswrapper[4799]: I0216 13:38:09.977334 4799 scope.go:117] "RemoveContainer" containerID="e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0" Feb 16 13:38:10 crc kubenswrapper[4799]: I0216 13:38:10.011633 4799 scope.go:117] "RemoveContainer" containerID="8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9" Feb 16 13:38:10 crc kubenswrapper[4799]: E0216 13:38:10.012535 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9\": container with ID starting with 8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9 not found: ID does not exist" containerID="8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9" Feb 16 13:38:10 crc kubenswrapper[4799]: I0216 13:38:10.012562 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9"} err="failed to get container status \"8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9\": rpc error: code = NotFound desc = could not find container \"8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9\": container with ID starting with 8c7a3be5b54adc2bbe6367de0d1713ac0fdf9b03265728a2609e587d93b562d9 not found: ID does not exist" Feb 16 13:38:10 crc kubenswrapper[4799]: I0216 13:38:10.012594 4799 scope.go:117] "RemoveContainer" containerID="8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4" Feb 16 13:38:10 crc kubenswrapper[4799]: E0216 13:38:10.012895 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4\": container with ID starting with 8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4 not found: ID does not exist" containerID="8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4" Feb 16 13:38:10 crc kubenswrapper[4799]: I0216 13:38:10.012919 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4"} err="failed to get container status \"8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4\": rpc error: code = NotFound desc = could not find container \"8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4\": container with ID starting with 8a6adfc9e3776fa465291c7a1a2a92ee664a048120e238b74a0867a2240311d4 not found: ID does not exist" Feb 16 13:38:10 crc kubenswrapper[4799]: I0216 13:38:10.012933 4799 scope.go:117] "RemoveContainer" containerID="e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0" Feb 16 13:38:10 crc kubenswrapper[4799]: E0216 
13:38:10.013487 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0\": container with ID starting with e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0 not found: ID does not exist" containerID="e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0" Feb 16 13:38:10 crc kubenswrapper[4799]: I0216 13:38:10.013509 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0"} err="failed to get container status \"e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0\": rpc error: code = NotFound desc = could not find container \"e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0\": container with ID starting with e1449eb01f2f7dcf4d947dd8b04f4b7342710d9ae81202ef30fbe307e2ba07b0 not found: ID does not exist" Feb 16 13:38:11 crc kubenswrapper[4799]: I0216 13:38:11.160674 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" path="/var/lib/kubelet/pods/5a559e23-e8c1-4800-8c59-726d02f2c716/volumes" Feb 16 13:38:17 crc kubenswrapper[4799]: I0216 13:38:17.149954 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:38:17 crc kubenswrapper[4799]: E0216 13:38:17.151032 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:38:28 crc kubenswrapper[4799]: I0216 13:38:28.149033 
4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:38:28 crc kubenswrapper[4799]: E0216 13:38:28.149852 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:38:39 crc kubenswrapper[4799]: I0216 13:38:39.149585 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:38:39 crc kubenswrapper[4799]: E0216 13:38:39.150507 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.028555 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qzsdz"] Feb 16 13:38:51 crc kubenswrapper[4799]: E0216 13:38:51.029500 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerName="extract-content" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.029515 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerName="extract-content" Feb 16 13:38:51 crc kubenswrapper[4799]: E0216 13:38:51.029536 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerName="extract-utilities" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.029543 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerName="extract-utilities" Feb 16 13:38:51 crc kubenswrapper[4799]: E0216 13:38:51.029557 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerName="registry-server" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.029562 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerName="registry-server" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.029766 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a559e23-e8c1-4800-8c59-726d02f2c716" containerName="registry-server" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.032712 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.111853 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-utilities\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.111998 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jk5\" (UniqueName: \"kubernetes.io/projected/4767955d-cee7-4249-bc5b-63deb25f7353-kube-api-access-44jk5\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.112045 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-catalog-content\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.117794 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qzsdz"] Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.149240 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:38:51 crc kubenswrapper[4799]: E0216 13:38:51.149599 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.214768 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-catalog-content\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.214981 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-utilities\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 
13:38:51.215085 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jk5\" (UniqueName: \"kubernetes.io/projected/4767955d-cee7-4249-bc5b-63deb25f7353-kube-api-access-44jk5\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.215843 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-catalog-content\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.216778 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-utilities\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.237648 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jk5\" (UniqueName: \"kubernetes.io/projected/4767955d-cee7-4249-bc5b-63deb25f7353-kube-api-access-44jk5\") pod \"community-operators-qzsdz\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.363149 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:38:51 crc kubenswrapper[4799]: I0216 13:38:51.922284 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qzsdz"] Feb 16 13:38:52 crc kubenswrapper[4799]: I0216 13:38:52.313546 4799 generic.go:334] "Generic (PLEG): container finished" podID="4767955d-cee7-4249-bc5b-63deb25f7353" containerID="6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92" exitCode=0 Feb 16 13:38:52 crc kubenswrapper[4799]: I0216 13:38:52.313662 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzsdz" event={"ID":"4767955d-cee7-4249-bc5b-63deb25f7353","Type":"ContainerDied","Data":"6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92"} Feb 16 13:38:52 crc kubenswrapper[4799]: I0216 13:38:52.315031 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzsdz" event={"ID":"4767955d-cee7-4249-bc5b-63deb25f7353","Type":"ContainerStarted","Data":"76d90e826be9ff26062e3a63c719b5f02d628ed29e52276036acf909ea0a6d96"} Feb 16 13:38:53 crc kubenswrapper[4799]: I0216 13:38:53.331070 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzsdz" event={"ID":"4767955d-cee7-4249-bc5b-63deb25f7353","Type":"ContainerStarted","Data":"a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2"} Feb 16 13:38:54 crc kubenswrapper[4799]: I0216 13:38:54.352076 4799 generic.go:334] "Generic (PLEG): container finished" podID="4767955d-cee7-4249-bc5b-63deb25f7353" containerID="a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2" exitCode=0 Feb 16 13:38:54 crc kubenswrapper[4799]: I0216 13:38:54.352229 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzsdz" 
event={"ID":"4767955d-cee7-4249-bc5b-63deb25f7353","Type":"ContainerDied","Data":"a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2"} Feb 16 13:38:55 crc kubenswrapper[4799]: I0216 13:38:55.367660 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzsdz" event={"ID":"4767955d-cee7-4249-bc5b-63deb25f7353","Type":"ContainerStarted","Data":"efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d"} Feb 16 13:38:55 crc kubenswrapper[4799]: I0216 13:38:55.394774 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qzsdz" podStartSLOduration=2.962823802 podStartE2EDuration="5.394740295s" podCreationTimestamp="2026-02-16 13:38:50 +0000 UTC" firstStartedPulling="2026-02-16 13:38:52.315663318 +0000 UTC m=+4037.908678652" lastFinishedPulling="2026-02-16 13:38:54.747579811 +0000 UTC m=+4040.340595145" observedRunningTime="2026-02-16 13:38:55.384337737 +0000 UTC m=+4040.977353081" watchObservedRunningTime="2026-02-16 13:38:55.394740295 +0000 UTC m=+4040.987755649" Feb 16 13:39:01 crc kubenswrapper[4799]: I0216 13:39:01.364015 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:39:01 crc kubenswrapper[4799]: I0216 13:39:01.364630 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:39:01 crc kubenswrapper[4799]: I0216 13:39:01.557841 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:39:01 crc kubenswrapper[4799]: I0216 13:39:01.612755 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:39:01 crc kubenswrapper[4799]: I0216 13:39:01.801694 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-qzsdz"] Feb 16 13:39:02 crc kubenswrapper[4799]: I0216 13:39:02.151214 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:39:02 crc kubenswrapper[4799]: E0216 13:39:02.151525 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:39:03 crc kubenswrapper[4799]: I0216 13:39:03.443699 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qzsdz" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" containerName="registry-server" containerID="cri-o://efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d" gracePeriod=2 Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.034210 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.125271 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-utilities\") pod \"4767955d-cee7-4249-bc5b-63deb25f7353\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.125320 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jk5\" (UniqueName: \"kubernetes.io/projected/4767955d-cee7-4249-bc5b-63deb25f7353-kube-api-access-44jk5\") pod \"4767955d-cee7-4249-bc5b-63deb25f7353\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.125532 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-catalog-content\") pod \"4767955d-cee7-4249-bc5b-63deb25f7353\" (UID: \"4767955d-cee7-4249-bc5b-63deb25f7353\") " Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.127668 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-utilities" (OuterVolumeSpecName: "utilities") pod "4767955d-cee7-4249-bc5b-63deb25f7353" (UID: "4767955d-cee7-4249-bc5b-63deb25f7353"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.133923 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4767955d-cee7-4249-bc5b-63deb25f7353-kube-api-access-44jk5" (OuterVolumeSpecName: "kube-api-access-44jk5") pod "4767955d-cee7-4249-bc5b-63deb25f7353" (UID: "4767955d-cee7-4249-bc5b-63deb25f7353"). InnerVolumeSpecName "kube-api-access-44jk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.180329 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4767955d-cee7-4249-bc5b-63deb25f7353" (UID: "4767955d-cee7-4249-bc5b-63deb25f7353"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.229231 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.229277 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767955d-cee7-4249-bc5b-63deb25f7353-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.229290 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44jk5\" (UniqueName: \"kubernetes.io/projected/4767955d-cee7-4249-bc5b-63deb25f7353-kube-api-access-44jk5\") on node \"crc\" DevicePath \"\"" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.466830 4799 generic.go:334] "Generic (PLEG): container finished" podID="4767955d-cee7-4249-bc5b-63deb25f7353" containerID="efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d" exitCode=0 Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.466929 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qzsdz" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.466910 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzsdz" event={"ID":"4767955d-cee7-4249-bc5b-63deb25f7353","Type":"ContainerDied","Data":"efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d"} Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.467113 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzsdz" event={"ID":"4767955d-cee7-4249-bc5b-63deb25f7353","Type":"ContainerDied","Data":"76d90e826be9ff26062e3a63c719b5f02d628ed29e52276036acf909ea0a6d96"} Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.467170 4799 scope.go:117] "RemoveContainer" containerID="efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.497151 4799 scope.go:117] "RemoveContainer" containerID="a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.527640 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qzsdz"] Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.548843 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qzsdz"] Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.597244 4799 scope.go:117] "RemoveContainer" containerID="6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.627341 4799 scope.go:117] "RemoveContainer" containerID="efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d" Feb 16 13:39:04 crc kubenswrapper[4799]: E0216 13:39:04.632325 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d\": container with ID starting with efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d not found: ID does not exist" containerID="efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.632376 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d"} err="failed to get container status \"efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d\": rpc error: code = NotFound desc = could not find container \"efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d\": container with ID starting with efff61fbc85c3e92fe21dea772fba41d338320cf274fca3d505c19eecd18ca1d not found: ID does not exist" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.632408 4799 scope.go:117] "RemoveContainer" containerID="a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2" Feb 16 13:39:04 crc kubenswrapper[4799]: E0216 13:39:04.632979 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2\": container with ID starting with a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2 not found: ID does not exist" containerID="a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.633035 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2"} err="failed to get container status \"a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2\": rpc error: code = NotFound desc = could not find container \"a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2\": container with ID 
starting with a3c8ac8229a60a61cee21044b8a588a57654523f8b7733f246a4c9a446418bb2 not found: ID does not exist" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.633066 4799 scope.go:117] "RemoveContainer" containerID="6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92" Feb 16 13:39:04 crc kubenswrapper[4799]: E0216 13:39:04.637335 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92\": container with ID starting with 6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92 not found: ID does not exist" containerID="6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92" Feb 16 13:39:04 crc kubenswrapper[4799]: I0216 13:39:04.637425 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92"} err="failed to get container status \"6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92\": rpc error: code = NotFound desc = could not find container \"6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92\": container with ID starting with 6df3f08cdec6f6d8e9c7d2c2abf2ecd165fd5e0735732b9e7d4d1fcc58530f92 not found: ID does not exist" Feb 16 13:39:05 crc kubenswrapper[4799]: I0216 13:39:05.165932 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" path="/var/lib/kubelet/pods/4767955d-cee7-4249-bc5b-63deb25f7353/volumes" Feb 16 13:39:15 crc kubenswrapper[4799]: I0216 13:39:15.155987 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:39:15 crc kubenswrapper[4799]: E0216 13:39:15.156751 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:39:29 crc kubenswrapper[4799]: I0216 13:39:29.149113 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:39:29 crc kubenswrapper[4799]: E0216 13:39:29.150584 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:39:40 crc kubenswrapper[4799]: I0216 13:39:40.149351 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:39:40 crc kubenswrapper[4799]: E0216 13:39:40.150204 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:39:55 crc kubenswrapper[4799]: I0216 13:39:55.155811 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:39:55 crc kubenswrapper[4799]: E0216 13:39:55.157701 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:40:07 crc kubenswrapper[4799]: I0216 13:40:07.149041 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:40:07 crc kubenswrapper[4799]: E0216 13:40:07.164082 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:40:20 crc kubenswrapper[4799]: I0216 13:40:20.149943 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:40:20 crc kubenswrapper[4799]: E0216 13:40:20.150778 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:40:33 crc kubenswrapper[4799]: I0216 13:40:33.149864 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:40:33 crc kubenswrapper[4799]: E0216 13:40:33.150788 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:40:47 crc kubenswrapper[4799]: I0216 13:40:47.149921 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:40:47 crc kubenswrapper[4799]: E0216 13:40:47.151231 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:41:01 crc kubenswrapper[4799]: I0216 13:41:01.149638 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:41:01 crc kubenswrapper[4799]: I0216 13:41:01.715412 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"63732bab17ded7fd695a87e4a15088436d71d18da69be1a1b71a67855aa6359c"} Feb 16 13:43:21 crc kubenswrapper[4799]: I0216 13:43:21.792922 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:43:21 crc kubenswrapper[4799]: I0216 13:43:21.793520 4799 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:43:51 crc kubenswrapper[4799]: I0216 13:43:51.793446 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:43:51 crc kubenswrapper[4799]: I0216 13:43:51.794863 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:44:21 crc kubenswrapper[4799]: I0216 13:44:21.793167 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:44:21 crc kubenswrapper[4799]: I0216 13:44:21.793874 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:44:21 crc kubenswrapper[4799]: I0216 13:44:21.793947 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:44:21 crc 
kubenswrapper[4799]: I0216 13:44:21.794917 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63732bab17ded7fd695a87e4a15088436d71d18da69be1a1b71a67855aa6359c"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:44:21 crc kubenswrapper[4799]: I0216 13:44:21.794993 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://63732bab17ded7fd695a87e4a15088436d71d18da69be1a1b71a67855aa6359c" gracePeriod=600 Feb 16 13:44:21 crc kubenswrapper[4799]: I0216 13:44:21.921708 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="63732bab17ded7fd695a87e4a15088436d71d18da69be1a1b71a67855aa6359c" exitCode=0 Feb 16 13:44:21 crc kubenswrapper[4799]: I0216 13:44:21.921755 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"63732bab17ded7fd695a87e4a15088436d71d18da69be1a1b71a67855aa6359c"} Feb 16 13:44:21 crc kubenswrapper[4799]: I0216 13:44:21.921804 4799 scope.go:117] "RemoveContainer" containerID="7f91b5b55347663fba1f561a2a8d1674983388ac290474733f929a5c0270e59e" Feb 16 13:44:22 crc kubenswrapper[4799]: I0216 13:44:22.932467 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b"} Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.225006 4799 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b7g5f"] Feb 16 13:44:42 crc kubenswrapper[4799]: E0216 13:44:42.226040 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" containerName="registry-server" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.226057 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" containerName="registry-server" Feb 16 13:44:42 crc kubenswrapper[4799]: E0216 13:44:42.226100 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" containerName="extract-utilities" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.226110 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" containerName="extract-utilities" Feb 16 13:44:42 crc kubenswrapper[4799]: E0216 13:44:42.226154 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" containerName="extract-content" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.226164 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" containerName="extract-content" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.226410 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="4767955d-cee7-4249-bc5b-63deb25f7353" containerName="registry-server" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.228083 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.239302 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7g5f"] Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.351681 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-catalog-content\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.351839 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwj84\" (UniqueName: \"kubernetes.io/projected/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-kube-api-access-xwj84\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.351950 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-utilities\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.454787 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwj84\" (UniqueName: \"kubernetes.io/projected/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-kube-api-access-xwj84\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.454916 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-utilities\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.455088 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-catalog-content\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.455451 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-utilities\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.455766 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-catalog-content\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.482257 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwj84\" (UniqueName: \"kubernetes.io/projected/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-kube-api-access-xwj84\") pod \"redhat-marketplace-b7g5f\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:42 crc kubenswrapper[4799]: I0216 13:44:42.545323 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:43 crc kubenswrapper[4799]: I0216 13:44:43.061947 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7g5f"] Feb 16 13:44:43 crc kubenswrapper[4799]: W0216 13:44:43.063312 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6466d70_c8a0_498f_a9d3_7bedea7c0dae.slice/crio-1268694d8a04780707b2a8bded968be77fb4a19977c2c1c07b4052b4d9ed2501 WatchSource:0}: Error finding container 1268694d8a04780707b2a8bded968be77fb4a19977c2c1c07b4052b4d9ed2501: Status 404 returned error can't find the container with id 1268694d8a04780707b2a8bded968be77fb4a19977c2c1c07b4052b4d9ed2501 Feb 16 13:44:43 crc kubenswrapper[4799]: I0216 13:44:43.134042 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7g5f" event={"ID":"c6466d70-c8a0-498f-a9d3-7bedea7c0dae","Type":"ContainerStarted","Data":"1268694d8a04780707b2a8bded968be77fb4a19977c2c1c07b4052b4d9ed2501"} Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.145161 4799 generic.go:334] "Generic (PLEG): container finished" podID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerID="a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2" exitCode=0 Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.145517 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7g5f" event={"ID":"c6466d70-c8a0-498f-a9d3-7bedea7c0dae","Type":"ContainerDied","Data":"a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2"} Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.147814 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.623803 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-x4xsx"] Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.626073 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.640055 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4xsx"] Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.808773 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-utilities\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.809254 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-catalog-content\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.809337 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbgz\" (UniqueName: \"kubernetes.io/projected/f791e1a7-94c2-485e-81c6-508a405bb2f9-kube-api-access-clbgz\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.911456 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-utilities\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " 
pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.911748 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-catalog-content\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.911928 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clbgz\" (UniqueName: \"kubernetes.io/projected/f791e1a7-94c2-485e-81c6-508a405bb2f9-kube-api-access-clbgz\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.912030 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-utilities\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.912294 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-catalog-content\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:44 crc kubenswrapper[4799]: I0216 13:44:44.932996 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clbgz\" (UniqueName: \"kubernetes.io/projected/f791e1a7-94c2-485e-81c6-508a405bb2f9-kube-api-access-clbgz\") pod \"redhat-operators-x4xsx\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " pod="openshift-marketplace/redhat-operators-x4xsx" Feb 
16 13:44:45 crc kubenswrapper[4799]: I0216 13:44:45.001406 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:45 crc kubenswrapper[4799]: I0216 13:44:45.511395 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4xsx"] Feb 16 13:44:46 crc kubenswrapper[4799]: I0216 13:44:46.183318 4799 generic.go:334] "Generic (PLEG): container finished" podID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerID="2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00" exitCode=0 Feb 16 13:44:46 crc kubenswrapper[4799]: I0216 13:44:46.183376 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7g5f" event={"ID":"c6466d70-c8a0-498f-a9d3-7bedea7c0dae","Type":"ContainerDied","Data":"2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00"} Feb 16 13:44:46 crc kubenswrapper[4799]: I0216 13:44:46.186587 4799 generic.go:334] "Generic (PLEG): container finished" podID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerID="a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9" exitCode=0 Feb 16 13:44:46 crc kubenswrapper[4799]: I0216 13:44:46.186630 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4xsx" event={"ID":"f791e1a7-94c2-485e-81c6-508a405bb2f9","Type":"ContainerDied","Data":"a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9"} Feb 16 13:44:46 crc kubenswrapper[4799]: I0216 13:44:46.186670 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4xsx" event={"ID":"f791e1a7-94c2-485e-81c6-508a405bb2f9","Type":"ContainerStarted","Data":"22c4ef52043bb6d61a60b7ad89d23b63837e78b1b5529ac2c0b2535645519df7"} Feb 16 13:44:47 crc kubenswrapper[4799]: I0216 13:44:47.199587 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7g5f" 
event={"ID":"c6466d70-c8a0-498f-a9d3-7bedea7c0dae","Type":"ContainerStarted","Data":"ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7"} Feb 16 13:44:47 crc kubenswrapper[4799]: I0216 13:44:47.234575 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b7g5f" podStartSLOduration=2.811419482 podStartE2EDuration="5.234559502s" podCreationTimestamp="2026-02-16 13:44:42 +0000 UTC" firstStartedPulling="2026-02-16 13:44:44.14762053 +0000 UTC m=+4389.740635864" lastFinishedPulling="2026-02-16 13:44:46.57076054 +0000 UTC m=+4392.163775884" observedRunningTime="2026-02-16 13:44:47.224455883 +0000 UTC m=+4392.817471227" watchObservedRunningTime="2026-02-16 13:44:47.234559502 +0000 UTC m=+4392.827574836" Feb 16 13:44:48 crc kubenswrapper[4799]: I0216 13:44:48.214084 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4xsx" event={"ID":"f791e1a7-94c2-485e-81c6-508a405bb2f9","Type":"ContainerStarted","Data":"890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4"} Feb 16 13:44:52 crc kubenswrapper[4799]: I0216 13:44:52.545813 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:52 crc kubenswrapper[4799]: I0216 13:44:52.546180 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:52 crc kubenswrapper[4799]: I0216 13:44:52.598242 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:53 crc kubenswrapper[4799]: I0216 13:44:53.260771 4799 generic.go:334] "Generic (PLEG): container finished" podID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerID="890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4" exitCode=0 Feb 16 13:44:53 crc kubenswrapper[4799]: I0216 13:44:53.260864 4799 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4xsx" event={"ID":"f791e1a7-94c2-485e-81c6-508a405bb2f9","Type":"ContainerDied","Data":"890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4"} Feb 16 13:44:53 crc kubenswrapper[4799]: I0216 13:44:53.457766 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:54 crc kubenswrapper[4799]: I0216 13:44:54.011648 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7g5f"] Feb 16 13:44:54 crc kubenswrapper[4799]: I0216 13:44:54.272010 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4xsx" event={"ID":"f791e1a7-94c2-485e-81c6-508a405bb2f9","Type":"ContainerStarted","Data":"508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd"} Feb 16 13:44:54 crc kubenswrapper[4799]: I0216 13:44:54.308175 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4xsx" podStartSLOduration=2.750849598 podStartE2EDuration="10.308152205s" podCreationTimestamp="2026-02-16 13:44:44 +0000 UTC" firstStartedPulling="2026-02-16 13:44:46.189300225 +0000 UTC m=+4391.782315559" lastFinishedPulling="2026-02-16 13:44:53.746602832 +0000 UTC m=+4399.339618166" observedRunningTime="2026-02-16 13:44:54.29401008 +0000 UTC m=+4399.887025414" watchObservedRunningTime="2026-02-16 13:44:54.308152205 +0000 UTC m=+4399.901167539" Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.001881 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.002390 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 
13:44:55.280003 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b7g5f" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerName="registry-server" containerID="cri-o://ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7" gracePeriod=2 Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.800811 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.972475 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-utilities\") pod \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.972527 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwj84\" (UniqueName: \"kubernetes.io/projected/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-kube-api-access-xwj84\") pod \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.972867 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-catalog-content\") pod \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\" (UID: \"c6466d70-c8a0-498f-a9d3-7bedea7c0dae\") " Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.973213 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-utilities" (OuterVolumeSpecName: "utilities") pod "c6466d70-c8a0-498f-a9d3-7bedea7c0dae" (UID: "c6466d70-c8a0-498f-a9d3-7bedea7c0dae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.973629 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.988399 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-kube-api-access-xwj84" (OuterVolumeSpecName: "kube-api-access-xwj84") pod "c6466d70-c8a0-498f-a9d3-7bedea7c0dae" (UID: "c6466d70-c8a0-498f-a9d3-7bedea7c0dae"). InnerVolumeSpecName "kube-api-access-xwj84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:44:55 crc kubenswrapper[4799]: I0216 13:44:55.996999 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6466d70-c8a0-498f-a9d3-7bedea7c0dae" (UID: "c6466d70-c8a0-498f-a9d3-7bedea7c0dae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.059755 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4xsx" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="registry-server" probeResult="failure" output=< Feb 16 13:44:56 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:44:56 crc kubenswrapper[4799]: > Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.076390 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.076656 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwj84\" (UniqueName: \"kubernetes.io/projected/c6466d70-c8a0-498f-a9d3-7bedea7c0dae-kube-api-access-xwj84\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.295356 4799 generic.go:334] "Generic (PLEG): container finished" podID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerID="ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7" exitCode=0 Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.295436 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7g5f" event={"ID":"c6466d70-c8a0-498f-a9d3-7bedea7c0dae","Type":"ContainerDied","Data":"ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7"} Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.295472 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7g5f" event={"ID":"c6466d70-c8a0-498f-a9d3-7bedea7c0dae","Type":"ContainerDied","Data":"1268694d8a04780707b2a8bded968be77fb4a19977c2c1c07b4052b4d9ed2501"} Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.295526 4799 
scope.go:117] "RemoveContainer" containerID="ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.295753 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7g5f" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.347703 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7g5f"] Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.350210 4799 scope.go:117] "RemoveContainer" containerID="2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.357993 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7g5f"] Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.378116 4799 scope.go:117] "RemoveContainer" containerID="a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.437021 4799 scope.go:117] "RemoveContainer" containerID="ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7" Feb 16 13:44:56 crc kubenswrapper[4799]: E0216 13:44:56.437740 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7\": container with ID starting with ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7 not found: ID does not exist" containerID="ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.437787 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7"} err="failed to get container status \"ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7\": rpc 
error: code = NotFound desc = could not find container \"ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7\": container with ID starting with ac4b84843f5f134a480bf4281cc1cd206657b7143d0a86e91b2ab215138d7fd7 not found: ID does not exist" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.437823 4799 scope.go:117] "RemoveContainer" containerID="2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00" Feb 16 13:44:56 crc kubenswrapper[4799]: E0216 13:44:56.438115 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00\": container with ID starting with 2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00 not found: ID does not exist" containerID="2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.438158 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00"} err="failed to get container status \"2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00\": rpc error: code = NotFound desc = could not find container \"2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00\": container with ID starting with 2f4a2c9d8e3e1abb35e07f6355087e91aa18165cf64b9e76e224dc564df35f00 not found: ID does not exist" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.438179 4799 scope.go:117] "RemoveContainer" containerID="a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2" Feb 16 13:44:56 crc kubenswrapper[4799]: E0216 13:44:56.438725 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2\": container with ID starting with 
a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2 not found: ID does not exist" containerID="a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2" Feb 16 13:44:56 crc kubenswrapper[4799]: I0216 13:44:56.438757 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2"} err="failed to get container status \"a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2\": rpc error: code = NotFound desc = could not find container \"a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2\": container with ID starting with a3ba5edab36946c20269b890c4ce5e06d724d5c45f6205569d418cadf87ee2b2 not found: ID does not exist" Feb 16 13:44:57 crc kubenswrapper[4799]: I0216 13:44:57.163359 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" path="/var/lib/kubelet/pods/c6466d70-c8a0-498f-a9d3-7bedea7c0dae/volumes" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.190630 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8"] Feb 16 13:45:00 crc kubenswrapper[4799]: E0216 13:45:00.191773 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerName="registry-server" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.191793 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerName="registry-server" Feb 16 13:45:00 crc kubenswrapper[4799]: E0216 13:45:00.191820 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerName="extract-utilities" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.191828 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" 
containerName="extract-utilities" Feb 16 13:45:00 crc kubenswrapper[4799]: E0216 13:45:00.191850 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerName="extract-content" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.191864 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerName="extract-content" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.192097 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6466d70-c8a0-498f-a9d3-7bedea7c0dae" containerName="registry-server" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.193138 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.196839 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.201858 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.205767 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8"] Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.265236 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxc9\" (UniqueName: \"kubernetes.io/projected/f334e86b-2cb4-4edd-b419-411f9aed6bbf-kube-api-access-4nxc9\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.265418 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f334e86b-2cb4-4edd-b419-411f9aed6bbf-config-volume\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.265459 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f334e86b-2cb4-4edd-b419-411f9aed6bbf-secret-volume\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.366326 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f334e86b-2cb4-4edd-b419-411f9aed6bbf-config-volume\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.366408 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f334e86b-2cb4-4edd-b419-411f9aed6bbf-secret-volume\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.367551 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxc9\" (UniqueName: \"kubernetes.io/projected/f334e86b-2cb4-4edd-b419-411f9aed6bbf-kube-api-access-4nxc9\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.367581 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f334e86b-2cb4-4edd-b419-411f9aed6bbf-config-volume\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.382040 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f334e86b-2cb4-4edd-b419-411f9aed6bbf-secret-volume\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.383798 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxc9\" (UniqueName: \"kubernetes.io/projected/f334e86b-2cb4-4edd-b419-411f9aed6bbf-kube-api-access-4nxc9\") pod \"collect-profiles-29520825-9xtc8\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:00 crc kubenswrapper[4799]: I0216 13:45:00.523899 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:01 crc kubenswrapper[4799]: I0216 13:45:01.202526 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8"] Feb 16 13:45:01 crc kubenswrapper[4799]: I0216 13:45:01.345338 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" event={"ID":"f334e86b-2cb4-4edd-b419-411f9aed6bbf","Type":"ContainerStarted","Data":"cc2b5bc9dfd302324b511bf5c722afea526619256b5264b2c61b1de28acb01a9"} Feb 16 13:45:02 crc kubenswrapper[4799]: I0216 13:45:02.357009 4799 generic.go:334] "Generic (PLEG): container finished" podID="f334e86b-2cb4-4edd-b419-411f9aed6bbf" containerID="cd96cef19e628a4512a46e11bc1f569117fbe06e9ae8a3f410cb2fd3a82cdd63" exitCode=0 Feb 16 13:45:02 crc kubenswrapper[4799]: I0216 13:45:02.357212 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" event={"ID":"f334e86b-2cb4-4edd-b419-411f9aed6bbf","Type":"ContainerDied","Data":"cd96cef19e628a4512a46e11bc1f569117fbe06e9ae8a3f410cb2fd3a82cdd63"} Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.766558 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.821757 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f334e86b-2cb4-4edd-b419-411f9aed6bbf-config-volume\") pod \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.821993 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f334e86b-2cb4-4edd-b419-411f9aed6bbf-secret-volume\") pod \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.822057 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nxc9\" (UniqueName: \"kubernetes.io/projected/f334e86b-2cb4-4edd-b419-411f9aed6bbf-kube-api-access-4nxc9\") pod \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\" (UID: \"f334e86b-2cb4-4edd-b419-411f9aed6bbf\") " Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.822991 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f334e86b-2cb4-4edd-b419-411f9aed6bbf-config-volume" (OuterVolumeSpecName: "config-volume") pod "f334e86b-2cb4-4edd-b419-411f9aed6bbf" (UID: "f334e86b-2cb4-4edd-b419-411f9aed6bbf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.838687 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f334e86b-2cb4-4edd-b419-411f9aed6bbf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f334e86b-2cb4-4edd-b419-411f9aed6bbf" (UID: "f334e86b-2cb4-4edd-b419-411f9aed6bbf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.838765 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f334e86b-2cb4-4edd-b419-411f9aed6bbf-kube-api-access-4nxc9" (OuterVolumeSpecName: "kube-api-access-4nxc9") pod "f334e86b-2cb4-4edd-b419-411f9aed6bbf" (UID: "f334e86b-2cb4-4edd-b419-411f9aed6bbf"). InnerVolumeSpecName "kube-api-access-4nxc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.924673 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f334e86b-2cb4-4edd-b419-411f9aed6bbf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.924711 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f334e86b-2cb4-4edd-b419-411f9aed6bbf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:03 crc kubenswrapper[4799]: I0216 13:45:03.924721 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nxc9\" (UniqueName: \"kubernetes.io/projected/f334e86b-2cb4-4edd-b419-411f9aed6bbf-kube-api-access-4nxc9\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:04 crc kubenswrapper[4799]: I0216 13:45:04.378423 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" event={"ID":"f334e86b-2cb4-4edd-b419-411f9aed6bbf","Type":"ContainerDied","Data":"cc2b5bc9dfd302324b511bf5c722afea526619256b5264b2c61b1de28acb01a9"} Feb 16 13:45:04 crc kubenswrapper[4799]: I0216 13:45:04.378476 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc2b5bc9dfd302324b511bf5c722afea526619256b5264b2c61b1de28acb01a9" Feb 16 13:45:04 crc kubenswrapper[4799]: I0216 13:45:04.378478 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-9xtc8" Feb 16 13:45:04 crc kubenswrapper[4799]: I0216 13:45:04.848570 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k"] Feb 16 13:45:04 crc kubenswrapper[4799]: I0216 13:45:04.859204 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-kvx5k"] Feb 16 13:45:05 crc kubenswrapper[4799]: I0216 13:45:05.067446 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:45:05 crc kubenswrapper[4799]: I0216 13:45:05.119836 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:45:05 crc kubenswrapper[4799]: I0216 13:45:05.162603 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00a4eb39-b079-4053-a166-8ca7a6987683" path="/var/lib/kubelet/pods/00a4eb39-b079-4053-a166-8ca7a6987683/volumes" Feb 16 13:45:05 crc kubenswrapper[4799]: I0216 13:45:05.311788 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4xsx"] Feb 16 13:45:06 crc kubenswrapper[4799]: I0216 13:45:06.397576 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4xsx" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="registry-server" containerID="cri-o://508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd" gracePeriod=2 Feb 16 13:45:06 crc kubenswrapper[4799]: I0216 13:45:06.888945 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:45:06 crc kubenswrapper[4799]: I0216 13:45:06.983621 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-catalog-content\") pod \"f791e1a7-94c2-485e-81c6-508a405bb2f9\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " Feb 16 13:45:06 crc kubenswrapper[4799]: I0216 13:45:06.983918 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clbgz\" (UniqueName: \"kubernetes.io/projected/f791e1a7-94c2-485e-81c6-508a405bb2f9-kube-api-access-clbgz\") pod \"f791e1a7-94c2-485e-81c6-508a405bb2f9\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " Feb 16 13:45:06 crc kubenswrapper[4799]: I0216 13:45:06.984051 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-utilities\") pod \"f791e1a7-94c2-485e-81c6-508a405bb2f9\" (UID: \"f791e1a7-94c2-485e-81c6-508a405bb2f9\") " Feb 16 13:45:06 crc kubenswrapper[4799]: I0216 13:45:06.984880 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-utilities" (OuterVolumeSpecName: "utilities") pod "f791e1a7-94c2-485e-81c6-508a405bb2f9" (UID: "f791e1a7-94c2-485e-81c6-508a405bb2f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:45:06 crc kubenswrapper[4799]: I0216 13:45:06.990730 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f791e1a7-94c2-485e-81c6-508a405bb2f9-kube-api-access-clbgz" (OuterVolumeSpecName: "kube-api-access-clbgz") pod "f791e1a7-94c2-485e-81c6-508a405bb2f9" (UID: "f791e1a7-94c2-485e-81c6-508a405bb2f9"). InnerVolumeSpecName "kube-api-access-clbgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.087658 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clbgz\" (UniqueName: \"kubernetes.io/projected/f791e1a7-94c2-485e-81c6-508a405bb2f9-kube-api-access-clbgz\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.087698 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.137385 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f791e1a7-94c2-485e-81c6-508a405bb2f9" (UID: "f791e1a7-94c2-485e-81c6-508a405bb2f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.190887 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f791e1a7-94c2-485e-81c6-508a405bb2f9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.409253 4799 generic.go:334] "Generic (PLEG): container finished" podID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerID="508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd" exitCode=0 Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.409327 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4xsx" event={"ID":"f791e1a7-94c2-485e-81c6-508a405bb2f9","Type":"ContainerDied","Data":"508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd"} Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.409353 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4xsx" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.409387 4799 scope.go:117] "RemoveContainer" containerID="508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.409368 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4xsx" event={"ID":"f791e1a7-94c2-485e-81c6-508a405bb2f9","Type":"ContainerDied","Data":"22c4ef52043bb6d61a60b7ad89d23b63837e78b1b5529ac2c0b2535645519df7"} Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.454880 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4xsx"] Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.468416 4799 scope.go:117] "RemoveContainer" containerID="890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.471840 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4xsx"] Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.515661 4799 scope.go:117] "RemoveContainer" containerID="a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.576114 4799 scope.go:117] "RemoveContainer" containerID="508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd" Feb 16 13:45:07 crc kubenswrapper[4799]: E0216 13:45:07.577800 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd\": container with ID starting with 508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd not found: ID does not exist" containerID="508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.577839 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd"} err="failed to get container status \"508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd\": rpc error: code = NotFound desc = could not find container \"508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd\": container with ID starting with 508defdafe1719f3b55e92e220920236a6e7cb13e7addf2ea907ad9d9e3d03fd not found: ID does not exist" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.577871 4799 scope.go:117] "RemoveContainer" containerID="890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4" Feb 16 13:45:07 crc kubenswrapper[4799]: E0216 13:45:07.579330 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4\": container with ID starting with 890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4 not found: ID does not exist" containerID="890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.579382 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4"} err="failed to get container status \"890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4\": rpc error: code = NotFound desc = could not find container \"890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4\": container with ID starting with 890730ccf010be28e7533634828ed1b9e5b2e8ee29da6acb4abd99a0eb9c93d4 not found: ID does not exist" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.579410 4799 scope.go:117] "RemoveContainer" containerID="a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9" Feb 16 13:45:07 crc kubenswrapper[4799]: E0216 
13:45:07.579907 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9\": container with ID starting with a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9 not found: ID does not exist" containerID="a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9" Feb 16 13:45:07 crc kubenswrapper[4799]: I0216 13:45:07.579936 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9"} err="failed to get container status \"a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9\": rpc error: code = NotFound desc = could not find container \"a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9\": container with ID starting with a331f7f8524831fb90ca37100cf9fcad80db70d485f47bc39543dbe4456a7fb9 not found: ID does not exist" Feb 16 13:45:09 crc kubenswrapper[4799]: I0216 13:45:09.164634 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" path="/var/lib/kubelet/pods/f791e1a7-94c2-485e-81c6-508a405bb2f9/volumes" Feb 16 13:45:44 crc kubenswrapper[4799]: I0216 13:45:44.738374 4799 scope.go:117] "RemoveContainer" containerID="c0a0c2cbea84d45457d4fb818ba90bddc315c588164d4d0c27ce8da7bc62ff6b" Feb 16 13:46:51 crc kubenswrapper[4799]: I0216 13:46:51.814251 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:46:51 crc kubenswrapper[4799]: I0216 13:46:51.814921 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" 
podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:47:21 crc kubenswrapper[4799]: I0216 13:47:21.792985 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:47:21 crc kubenswrapper[4799]: I0216 13:47:21.793466 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.793305 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.793882 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.793926 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.794699 4799 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.794746 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" gracePeriod=600 Feb 16 13:47:51 crc kubenswrapper[4799]: E0216 13:47:51.920290 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.971082 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" exitCode=0 Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.971137 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b"} Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.971248 4799 scope.go:117] "RemoveContainer" 
containerID="63732bab17ded7fd695a87e4a15088436d71d18da69be1a1b71a67855aa6359c" Feb 16 13:47:51 crc kubenswrapper[4799]: I0216 13:47:51.972052 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:47:51 crc kubenswrapper[4799]: E0216 13:47:51.972324 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:48:03 crc kubenswrapper[4799]: I0216 13:48:03.149428 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:48:03 crc kubenswrapper[4799]: E0216 13:48:03.150153 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:48:16 crc kubenswrapper[4799]: I0216 13:48:16.149087 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:48:16 crc kubenswrapper[4799]: E0216 13:48:16.149920 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:48:30 crc kubenswrapper[4799]: I0216 13:48:30.150201 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:48:30 crc kubenswrapper[4799]: E0216 13:48:30.151165 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.948233 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hxzf6"] Feb 16 13:48:34 crc kubenswrapper[4799]: E0216 13:48:34.949358 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="extract-content" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.949378 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="extract-content" Feb 16 13:48:34 crc kubenswrapper[4799]: E0216 13:48:34.949395 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="registry-server" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.949403 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="registry-server" Feb 16 13:48:34 crc kubenswrapper[4799]: E0216 13:48:34.949420 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f334e86b-2cb4-4edd-b419-411f9aed6bbf" containerName="collect-profiles" Feb 16 13:48:34 crc kubenswrapper[4799]: 
I0216 13:48:34.949428 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f334e86b-2cb4-4edd-b419-411f9aed6bbf" containerName="collect-profiles" Feb 16 13:48:34 crc kubenswrapper[4799]: E0216 13:48:34.949479 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="extract-utilities" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.949488 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="extract-utilities" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.949713 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f334e86b-2cb4-4edd-b419-411f9aed6bbf" containerName="collect-profiles" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.949746 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="f791e1a7-94c2-485e-81c6-508a405bb2f9" containerName="registry-server" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.951586 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.965159 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hxzf6"] Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.968958 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-utilities\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.969001 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8dlk\" (UniqueName: \"kubernetes.io/projected/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-kube-api-access-n8dlk\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:34 crc kubenswrapper[4799]: I0216 13:48:34.969022 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-catalog-content\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:35 crc kubenswrapper[4799]: I0216 13:48:35.070599 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8dlk\" (UniqueName: \"kubernetes.io/projected/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-kube-api-access-n8dlk\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:35 crc kubenswrapper[4799]: I0216 13:48:35.070909 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-catalog-content\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:35 crc kubenswrapper[4799]: I0216 13:48:35.071184 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-utilities\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:35 crc kubenswrapper[4799]: I0216 13:48:35.071389 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-catalog-content\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:35 crc kubenswrapper[4799]: I0216 13:48:35.071581 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-utilities\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:35 crc kubenswrapper[4799]: I0216 13:48:35.105010 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8dlk\" (UniqueName: \"kubernetes.io/projected/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-kube-api-access-n8dlk\") pod \"certified-operators-hxzf6\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:35 crc kubenswrapper[4799]: I0216 13:48:35.275831 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:35 crc kubenswrapper[4799]: I0216 13:48:35.913039 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hxzf6"] Feb 16 13:48:36 crc kubenswrapper[4799]: I0216 13:48:36.445380 4799 generic.go:334] "Generic (PLEG): container finished" podID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerID="6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267" exitCode=0 Feb 16 13:48:36 crc kubenswrapper[4799]: I0216 13:48:36.445676 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzf6" event={"ID":"1dc3db5e-46c5-44fa-8a29-3c3cd895083e","Type":"ContainerDied","Data":"6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267"} Feb 16 13:48:36 crc kubenswrapper[4799]: I0216 13:48:36.446434 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzf6" event={"ID":"1dc3db5e-46c5-44fa-8a29-3c3cd895083e","Type":"ContainerStarted","Data":"4844a557a669b35f5d94d48901f05cae90a8c32615ef1848e353441fca8f7e5d"} Feb 16 13:48:38 crc kubenswrapper[4799]: I0216 13:48:38.472204 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzf6" event={"ID":"1dc3db5e-46c5-44fa-8a29-3c3cd895083e","Type":"ContainerStarted","Data":"e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd"} Feb 16 13:48:39 crc kubenswrapper[4799]: I0216 13:48:39.485686 4799 generic.go:334] "Generic (PLEG): container finished" podID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerID="e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd" exitCode=0 Feb 16 13:48:39 crc kubenswrapper[4799]: I0216 13:48:39.485749 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzf6" 
event={"ID":"1dc3db5e-46c5-44fa-8a29-3c3cd895083e","Type":"ContainerDied","Data":"e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd"} Feb 16 13:48:40 crc kubenswrapper[4799]: I0216 13:48:40.498592 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzf6" event={"ID":"1dc3db5e-46c5-44fa-8a29-3c3cd895083e","Type":"ContainerStarted","Data":"7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127"} Feb 16 13:48:40 crc kubenswrapper[4799]: I0216 13:48:40.525720 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hxzf6" podStartSLOduration=3.106234821 podStartE2EDuration="6.525698447s" podCreationTimestamp="2026-02-16 13:48:34 +0000 UTC" firstStartedPulling="2026-02-16 13:48:36.448524053 +0000 UTC m=+4622.041539387" lastFinishedPulling="2026-02-16 13:48:39.867987669 +0000 UTC m=+4625.461003013" observedRunningTime="2026-02-16 13:48:40.514877697 +0000 UTC m=+4626.107893021" watchObservedRunningTime="2026-02-16 13:48:40.525698447 +0000 UTC m=+4626.118713771" Feb 16 13:48:42 crc kubenswrapper[4799]: I0216 13:48:42.149905 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:48:42 crc kubenswrapper[4799]: E0216 13:48:42.150872 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:48:45 crc kubenswrapper[4799]: I0216 13:48:45.277630 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:45 crc 
kubenswrapper[4799]: I0216 13:48:45.278578 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:45 crc kubenswrapper[4799]: I0216 13:48:45.333909 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:45 crc kubenswrapper[4799]: I0216 13:48:45.615687 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:45 crc kubenswrapper[4799]: I0216 13:48:45.678291 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hxzf6"] Feb 16 13:48:47 crc kubenswrapper[4799]: I0216 13:48:47.574673 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hxzf6" podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerName="registry-server" containerID="cri-o://7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127" gracePeriod=2 Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.117078 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.319464 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-catalog-content\") pod \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.319667 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-utilities\") pod \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.319872 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8dlk\" (UniqueName: \"kubernetes.io/projected/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-kube-api-access-n8dlk\") pod \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\" (UID: \"1dc3db5e-46c5-44fa-8a29-3c3cd895083e\") " Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.320754 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-utilities" (OuterVolumeSpecName: "utilities") pod "1dc3db5e-46c5-44fa-8a29-3c3cd895083e" (UID: "1dc3db5e-46c5-44fa-8a29-3c3cd895083e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.332078 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-kube-api-access-n8dlk" (OuterVolumeSpecName: "kube-api-access-n8dlk") pod "1dc3db5e-46c5-44fa-8a29-3c3cd895083e" (UID: "1dc3db5e-46c5-44fa-8a29-3c3cd895083e"). InnerVolumeSpecName "kube-api-access-n8dlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.422071 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.422407 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8dlk\" (UniqueName: \"kubernetes.io/projected/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-kube-api-access-n8dlk\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.438489 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dc3db5e-46c5-44fa-8a29-3c3cd895083e" (UID: "1dc3db5e-46c5-44fa-8a29-3c3cd895083e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.524110 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc3db5e-46c5-44fa-8a29-3c3cd895083e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.586599 4799 generic.go:334] "Generic (PLEG): container finished" podID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerID="7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127" exitCode=0 Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.586641 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzf6" event={"ID":"1dc3db5e-46c5-44fa-8a29-3c3cd895083e","Type":"ContainerDied","Data":"7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127"} Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.586668 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-hxzf6" event={"ID":"1dc3db5e-46c5-44fa-8a29-3c3cd895083e","Type":"ContainerDied","Data":"4844a557a669b35f5d94d48901f05cae90a8c32615ef1848e353441fca8f7e5d"} Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.586686 4799 scope.go:117] "RemoveContainer" containerID="7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.586679 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hxzf6" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.618046 4799 scope.go:117] "RemoveContainer" containerID="e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.630336 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hxzf6"] Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.638604 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hxzf6"] Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.643689 4799 scope.go:117] "RemoveContainer" containerID="6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.687643 4799 scope.go:117] "RemoveContainer" containerID="7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127" Feb 16 13:48:48 crc kubenswrapper[4799]: E0216 13:48:48.688195 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127\": container with ID starting with 7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127 not found: ID does not exist" containerID="7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 
13:48:48.688393 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127"} err="failed to get container status \"7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127\": rpc error: code = NotFound desc = could not find container \"7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127\": container with ID starting with 7c63225f70b27e4eff4e1d04ed22512a6d0190c00a5fa1aff5bb3cf66a977127 not found: ID does not exist" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.688549 4799 scope.go:117] "RemoveContainer" containerID="e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd" Feb 16 13:48:48 crc kubenswrapper[4799]: E0216 13:48:48.689050 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd\": container with ID starting with e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd not found: ID does not exist" containerID="e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.689091 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd"} err="failed to get container status \"e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd\": rpc error: code = NotFound desc = could not find container \"e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd\": container with ID starting with e6c90ac4eb9c9b2337bb3b0a6e4dfbafc1270600d7a72436421a16d8df3397dd not found: ID does not exist" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.689142 4799 scope.go:117] "RemoveContainer" containerID="6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267" Feb 16 13:48:48 crc 
kubenswrapper[4799]: E0216 13:48:48.689388 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267\": container with ID starting with 6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267 not found: ID does not exist" containerID="6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267" Feb 16 13:48:48 crc kubenswrapper[4799]: I0216 13:48:48.689423 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267"} err="failed to get container status \"6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267\": rpc error: code = NotFound desc = could not find container \"6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267\": container with ID starting with 6f9001f2822bbbf8bfc838062d727692fde89b2e6d04d320591af11420e02267 not found: ID does not exist" Feb 16 13:48:49 crc kubenswrapper[4799]: I0216 13:48:49.164398 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" path="/var/lib/kubelet/pods/1dc3db5e-46c5-44fa-8a29-3c3cd895083e/volumes" Feb 16 13:48:54 crc kubenswrapper[4799]: I0216 13:48:54.149953 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:48:54 crc kubenswrapper[4799]: E0216 13:48:54.150807 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:49:07 crc 
kubenswrapper[4799]: I0216 13:49:07.149866 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:49:07 crc kubenswrapper[4799]: E0216 13:49:07.151210 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:49:18 crc kubenswrapper[4799]: I0216 13:49:18.149825 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:49:18 crc kubenswrapper[4799]: E0216 13:49:18.150745 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:49:29 crc kubenswrapper[4799]: I0216 13:49:29.150114 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:49:29 crc kubenswrapper[4799]: E0216 13:49:29.151325 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 
16 13:49:44 crc kubenswrapper[4799]: I0216 13:49:44.149314 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:49:44 crc kubenswrapper[4799]: E0216 13:49:44.150020 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:49:57 crc kubenswrapper[4799]: I0216 13:49:57.149865 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:49:57 crc kubenswrapper[4799]: E0216 13:49:57.150613 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:50:09 crc kubenswrapper[4799]: I0216 13:50:09.149421 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:50:09 crc kubenswrapper[4799]: E0216 13:50:09.150272 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" 
podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:50:23 crc kubenswrapper[4799]: I0216 13:50:23.150310 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:50:23 crc kubenswrapper[4799]: E0216 13:50:23.151193 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:50:36 crc kubenswrapper[4799]: I0216 13:50:36.149585 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:50:36 crc kubenswrapper[4799]: E0216 13:50:36.150586 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:50:49 crc kubenswrapper[4799]: I0216 13:50:49.149191 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:50:49 crc kubenswrapper[4799]: E0216 13:50:49.149944 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:51:02 crc kubenswrapper[4799]: I0216 13:51:02.148976 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:51:02 crc kubenswrapper[4799]: E0216 13:51:02.149883 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:51:17 crc kubenswrapper[4799]: I0216 13:51:17.150038 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:51:17 crc kubenswrapper[4799]: E0216 13:51:17.150866 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:51:31 crc kubenswrapper[4799]: I0216 13:51:31.150109 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:51:31 crc kubenswrapper[4799]: E0216 13:51:31.150985 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.149903 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:51:44 crc kubenswrapper[4799]: E0216 13:51:44.150764 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.244379 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ktnd"] Feb 16 13:51:44 crc kubenswrapper[4799]: E0216 13:51:44.244996 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerName="extract-content" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.245082 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerName="extract-content" Feb 16 13:51:44 crc kubenswrapper[4799]: E0216 13:51:44.245182 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerName="registry-server" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.245244 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerName="registry-server" Feb 16 13:51:44 crc kubenswrapper[4799]: E0216 13:51:44.245325 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerName="extract-utilities" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.245381 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerName="extract-utilities" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.245632 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc3db5e-46c5-44fa-8a29-3c3cd895083e" containerName="registry-server" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.247109 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.259439 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ktnd"] Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.352043 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtd44\" (UniqueName: \"kubernetes.io/projected/bf877b78-2e0d-45fc-b704-3227cbb536cd-kube-api-access-jtd44\") pod \"community-operators-9ktnd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.352451 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-utilities\") pod \"community-operators-9ktnd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.352543 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-catalog-content\") pod \"community-operators-9ktnd\" (UID: 
\"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.454041 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtd44\" (UniqueName: \"kubernetes.io/projected/bf877b78-2e0d-45fc-b704-3227cbb536cd-kube-api-access-jtd44\") pod \"community-operators-9ktnd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.454147 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-utilities\") pod \"community-operators-9ktnd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.454242 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-catalog-content\") pod \"community-operators-9ktnd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.454776 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-utilities\") pod \"community-operators-9ktnd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.454794 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-catalog-content\") pod \"community-operators-9ktnd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") 
" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.477656 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtd44\" (UniqueName: \"kubernetes.io/projected/bf877b78-2e0d-45fc-b704-3227cbb536cd-kube-api-access-jtd44\") pod \"community-operators-9ktnd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:44 crc kubenswrapper[4799]: I0216 13:51:44.600855 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:45 crc kubenswrapper[4799]: I0216 13:51:45.191295 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ktnd"] Feb 16 13:51:45 crc kubenswrapper[4799]: I0216 13:51:45.500600 4799 generic.go:334] "Generic (PLEG): container finished" podID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerID="2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84" exitCode=0 Feb 16 13:51:45 crc kubenswrapper[4799]: I0216 13:51:45.500720 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktnd" event={"ID":"bf877b78-2e0d-45fc-b704-3227cbb536cd","Type":"ContainerDied","Data":"2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84"} Feb 16 13:51:45 crc kubenswrapper[4799]: I0216 13:51:45.501243 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktnd" event={"ID":"bf877b78-2e0d-45fc-b704-3227cbb536cd","Type":"ContainerStarted","Data":"a2b29a5df99f9ce19d7e35c329a8e446ecbef7543787133d14d23442927aaa9e"} Feb 16 13:51:45 crc kubenswrapper[4799]: I0216 13:51:45.503521 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:51:46 crc kubenswrapper[4799]: I0216 13:51:46.513460 4799 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-9ktnd" event={"ID":"bf877b78-2e0d-45fc-b704-3227cbb536cd","Type":"ContainerStarted","Data":"9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b"} Feb 16 13:51:47 crc kubenswrapper[4799]: I0216 13:51:47.523395 4799 generic.go:334] "Generic (PLEG): container finished" podID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerID="9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b" exitCode=0 Feb 16 13:51:47 crc kubenswrapper[4799]: I0216 13:51:47.523465 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktnd" event={"ID":"bf877b78-2e0d-45fc-b704-3227cbb536cd","Type":"ContainerDied","Data":"9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b"} Feb 16 13:51:48 crc kubenswrapper[4799]: I0216 13:51:48.539580 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktnd" event={"ID":"bf877b78-2e0d-45fc-b704-3227cbb536cd","Type":"ContainerStarted","Data":"26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593"} Feb 16 13:51:48 crc kubenswrapper[4799]: I0216 13:51:48.560492 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ktnd" podStartSLOduration=2.135606339 podStartE2EDuration="4.56047542s" podCreationTimestamp="2026-02-16 13:51:44 +0000 UTC" firstStartedPulling="2026-02-16 13:51:45.503165757 +0000 UTC m=+4811.096181091" lastFinishedPulling="2026-02-16 13:51:47.928034838 +0000 UTC m=+4813.521050172" observedRunningTime="2026-02-16 13:51:48.554415027 +0000 UTC m=+4814.147430361" watchObservedRunningTime="2026-02-16 13:51:48.56047542 +0000 UTC m=+4814.153490754" Feb 16 13:51:54 crc kubenswrapper[4799]: I0216 13:51:54.602965 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:54 crc kubenswrapper[4799]: I0216 
13:51:54.603642 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:54 crc kubenswrapper[4799]: I0216 13:51:54.670300 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:55 crc kubenswrapper[4799]: I0216 13:51:55.677739 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:55 crc kubenswrapper[4799]: I0216 13:51:55.723391 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ktnd"] Feb 16 13:51:56 crc kubenswrapper[4799]: I0216 13:51:56.151639 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:51:56 crc kubenswrapper[4799]: E0216 13:51:56.152383 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:51:57 crc kubenswrapper[4799]: I0216 13:51:57.610322 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9ktnd" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerName="registry-server" containerID="cri-o://26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593" gracePeriod=2 Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.082457 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.131560 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-catalog-content\") pod \"bf877b78-2e0d-45fc-b704-3227cbb536cd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.131714 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-utilities\") pod \"bf877b78-2e0d-45fc-b704-3227cbb536cd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.131799 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtd44\" (UniqueName: \"kubernetes.io/projected/bf877b78-2e0d-45fc-b704-3227cbb536cd-kube-api-access-jtd44\") pod \"bf877b78-2e0d-45fc-b704-3227cbb536cd\" (UID: \"bf877b78-2e0d-45fc-b704-3227cbb536cd\") " Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.133326 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-utilities" (OuterVolumeSpecName: "utilities") pod "bf877b78-2e0d-45fc-b704-3227cbb536cd" (UID: "bf877b78-2e0d-45fc-b704-3227cbb536cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.139486 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf877b78-2e0d-45fc-b704-3227cbb536cd-kube-api-access-jtd44" (OuterVolumeSpecName: "kube-api-access-jtd44") pod "bf877b78-2e0d-45fc-b704-3227cbb536cd" (UID: "bf877b78-2e0d-45fc-b704-3227cbb536cd"). InnerVolumeSpecName "kube-api-access-jtd44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.196749 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf877b78-2e0d-45fc-b704-3227cbb536cd" (UID: "bf877b78-2e0d-45fc-b704-3227cbb536cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.236046 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtd44\" (UniqueName: \"kubernetes.io/projected/bf877b78-2e0d-45fc-b704-3227cbb536cd-kube-api-access-jtd44\") on node \"crc\" DevicePath \"\"" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.236084 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.236096 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf877b78-2e0d-45fc-b704-3227cbb536cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.626621 4799 generic.go:334] "Generic (PLEG): container finished" podID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerID="26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593" exitCode=0 Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.626703 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktnd" event={"ID":"bf877b78-2e0d-45fc-b704-3227cbb536cd","Type":"ContainerDied","Data":"26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593"} Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.626781 4799 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ktnd" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.626827 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktnd" event={"ID":"bf877b78-2e0d-45fc-b704-3227cbb536cd","Type":"ContainerDied","Data":"a2b29a5df99f9ce19d7e35c329a8e446ecbef7543787133d14d23442927aaa9e"} Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.626865 4799 scope.go:117] "RemoveContainer" containerID="26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.655299 4799 scope.go:117] "RemoveContainer" containerID="9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.675360 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ktnd"] Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.692027 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9ktnd"] Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.700981 4799 scope.go:117] "RemoveContainer" containerID="2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.743772 4799 scope.go:117] "RemoveContainer" containerID="26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593" Feb 16 13:51:58 crc kubenswrapper[4799]: E0216 13:51:58.744233 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593\": container with ID starting with 26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593 not found: ID does not exist" containerID="26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.744308 
4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593"} err="failed to get container status \"26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593\": rpc error: code = NotFound desc = could not find container \"26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593\": container with ID starting with 26e9e69c3f352061428001ffcc98b8c5d971e36e7cbf20b201761fbf93cd0593 not found: ID does not exist" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.744345 4799 scope.go:117] "RemoveContainer" containerID="9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b" Feb 16 13:51:58 crc kubenswrapper[4799]: E0216 13:51:58.744785 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b\": container with ID starting with 9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b not found: ID does not exist" containerID="9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.744818 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b"} err="failed to get container status \"9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b\": rpc error: code = NotFound desc = could not find container \"9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b\": container with ID starting with 9ef250a8375ab82fe863dc4a669fc16c71458226aa6f6cf6b103e66b4425d97b not found: ID does not exist" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.744841 4799 scope.go:117] "RemoveContainer" containerID="2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84" Feb 16 13:51:58 crc kubenswrapper[4799]: E0216 
13:51:58.745062 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84\": container with ID starting with 2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84 not found: ID does not exist" containerID="2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84" Feb 16 13:51:58 crc kubenswrapper[4799]: I0216 13:51:58.745088 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84"} err="failed to get container status \"2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84\": rpc error: code = NotFound desc = could not find container \"2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84\": container with ID starting with 2f9f9bec4e398439561378aafd000dca9d4e520141a75a85803fae6280361b84 not found: ID does not exist" Feb 16 13:51:59 crc kubenswrapper[4799]: I0216 13:51:59.160976 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" path="/var/lib/kubelet/pods/bf877b78-2e0d-45fc-b704-3227cbb536cd/volumes" Feb 16 13:52:09 crc kubenswrapper[4799]: I0216 13:52:09.149485 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:52:09 crc kubenswrapper[4799]: E0216 13:52:09.151298 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:52:21 crc kubenswrapper[4799]: I0216 13:52:21.150393 
4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:52:21 crc kubenswrapper[4799]: E0216 13:52:21.151370 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:52:33 crc kubenswrapper[4799]: I0216 13:52:33.149811 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:52:33 crc kubenswrapper[4799]: E0216 13:52:33.153001 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:52:47 crc kubenswrapper[4799]: I0216 13:52:47.149602 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:52:47 crc kubenswrapper[4799]: E0216 13:52:47.150489 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:52:58 crc kubenswrapper[4799]: I0216 
13:52:58.149044 4799 scope.go:117] "RemoveContainer" containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:52:59 crc kubenswrapper[4799]: I0216 13:52:59.227102 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"a46958b187cff675e0284299155342733ca343a8cf2c459bd0a708ded46e17a8"} Feb 16 13:53:04 crc kubenswrapper[4799]: I0216 13:53:04.211072 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-9t876" podUID="347ac568-46b1-4360-90fb-22d726ea9ab5" containerName="registry-server" probeResult="failure" output=< Feb 16 13:53:04 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:53:04 crc kubenswrapper[4799]: > Feb 16 13:53:04 crc kubenswrapper[4799]: I0216 13:53:04.215946 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-9t876" podUID="347ac568-46b1-4360-90fb-22d726ea9ab5" containerName="registry-server" probeResult="failure" output=< Feb 16 13:53:04 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:53:04 crc kubenswrapper[4799]: > Feb 16 13:53:10 crc kubenswrapper[4799]: I0216 13:53:10.934141 4799 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7f54946f5f-2jrb5" podUID="441c04e7-2794-48cf-bc03-4c13536d22c4" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.033508 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5twch"] Feb 16 13:55:00 crc kubenswrapper[4799]: E0216 13:55:00.035509 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerName="registry-server" Feb 16 
13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.035593 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerName="registry-server" Feb 16 13:55:00 crc kubenswrapper[4799]: E0216 13:55:00.035686 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerName="extract-utilities" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.036412 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerName="extract-utilities" Feb 16 13:55:00 crc kubenswrapper[4799]: E0216 13:55:00.036484 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerName="extract-content" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.036569 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerName="extract-content" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.036826 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf877b78-2e0d-45fc-b704-3227cbb536cd" containerName="registry-server" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.038728 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.051113 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5twch"] Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.132667 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7vw\" (UniqueName: \"kubernetes.io/projected/bcf88afe-6540-4494-aae0-2d16af3cc3a2-kube-api-access-kx7vw\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.133115 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-catalog-content\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.133327 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-utilities\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.235078 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7vw\" (UniqueName: \"kubernetes.io/projected/bcf88afe-6540-4494-aae0-2d16af3cc3a2-kube-api-access-kx7vw\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.235281 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-catalog-content\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.235615 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-utilities\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.236590 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-utilities\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.236819 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-catalog-content\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.259300 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7vw\" (UniqueName: \"kubernetes.io/projected/bcf88afe-6540-4494-aae0-2d16af3cc3a2-kube-api-access-kx7vw\") pod \"redhat-operators-5twch\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.373401 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:00 crc kubenswrapper[4799]: I0216 13:55:00.941438 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5twch"] Feb 16 13:55:01 crc kubenswrapper[4799]: I0216 13:55:01.366268 4799 generic.go:334] "Generic (PLEG): container finished" podID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerID="a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48" exitCode=0 Feb 16 13:55:01 crc kubenswrapper[4799]: I0216 13:55:01.366466 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5twch" event={"ID":"bcf88afe-6540-4494-aae0-2d16af3cc3a2","Type":"ContainerDied","Data":"a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48"} Feb 16 13:55:01 crc kubenswrapper[4799]: I0216 13:55:01.366603 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5twch" event={"ID":"bcf88afe-6540-4494-aae0-2d16af3cc3a2","Type":"ContainerStarted","Data":"391b38f00d28a5940554483439c84fd6ac61b04207d6b4bbc9bebb8065983557"} Feb 16 13:55:03 crc kubenswrapper[4799]: I0216 13:55:03.384761 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5twch" event={"ID":"bcf88afe-6540-4494-aae0-2d16af3cc3a2","Type":"ContainerStarted","Data":"c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea"} Feb 16 13:55:07 crc kubenswrapper[4799]: I0216 13:55:07.420174 4799 generic.go:334] "Generic (PLEG): container finished" podID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerID="c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea" exitCode=0 Feb 16 13:55:07 crc kubenswrapper[4799]: I0216 13:55:07.420462 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5twch" 
event={"ID":"bcf88afe-6540-4494-aae0-2d16af3cc3a2","Type":"ContainerDied","Data":"c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea"} Feb 16 13:55:09 crc kubenswrapper[4799]: I0216 13:55:09.441340 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5twch" event={"ID":"bcf88afe-6540-4494-aae0-2d16af3cc3a2","Type":"ContainerStarted","Data":"735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405"} Feb 16 13:55:09 crc kubenswrapper[4799]: I0216 13:55:09.459726 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5twch" podStartSLOduration=1.690510482 podStartE2EDuration="9.459705369s" podCreationTimestamp="2026-02-16 13:55:00 +0000 UTC" firstStartedPulling="2026-02-16 13:55:01.368284743 +0000 UTC m=+5006.961300087" lastFinishedPulling="2026-02-16 13:55:09.13747964 +0000 UTC m=+5014.730494974" observedRunningTime="2026-02-16 13:55:09.458419142 +0000 UTC m=+5015.051434516" watchObservedRunningTime="2026-02-16 13:55:09.459705369 +0000 UTC m=+5015.052720703" Feb 16 13:55:10 crc kubenswrapper[4799]: I0216 13:55:10.374454 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:10 crc kubenswrapper[4799]: I0216 13:55:10.374584 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:11 crc kubenswrapper[4799]: I0216 13:55:11.443256 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5twch" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="registry-server" probeResult="failure" output=< Feb 16 13:55:11 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:55:11 crc kubenswrapper[4799]: > Feb 16 13:55:21 crc kubenswrapper[4799]: I0216 13:55:21.421629 4799 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-5twch" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="registry-server" probeResult="failure" output=< Feb 16 13:55:21 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:55:21 crc kubenswrapper[4799]: > Feb 16 13:55:21 crc kubenswrapper[4799]: I0216 13:55:21.793040 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:55:21 crc kubenswrapper[4799]: I0216 13:55:21.793171 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:55:31 crc kubenswrapper[4799]: I0216 13:55:31.424627 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5twch" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="registry-server" probeResult="failure" output=< Feb 16 13:55:31 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 13:55:31 crc kubenswrapper[4799]: > Feb 16 13:55:40 crc kubenswrapper[4799]: I0216 13:55:40.421792 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:40 crc kubenswrapper[4799]: I0216 13:55:40.487795 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:40 crc kubenswrapper[4799]: I0216 13:55:40.658067 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-5twch"] Feb 16 13:55:41 crc kubenswrapper[4799]: I0216 13:55:41.719341 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5twch" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="registry-server" containerID="cri-o://735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405" gracePeriod=2 Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.216218 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.348107 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-catalog-content\") pod \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.348226 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-utilities\") pod \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.348349 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx7vw\" (UniqueName: \"kubernetes.io/projected/bcf88afe-6540-4494-aae0-2d16af3cc3a2-kube-api-access-kx7vw\") pod \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\" (UID: \"bcf88afe-6540-4494-aae0-2d16af3cc3a2\") " Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.349548 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-utilities" (OuterVolumeSpecName: "utilities") pod "bcf88afe-6540-4494-aae0-2d16af3cc3a2" (UID: 
"bcf88afe-6540-4494-aae0-2d16af3cc3a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.358954 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf88afe-6540-4494-aae0-2d16af3cc3a2-kube-api-access-kx7vw" (OuterVolumeSpecName: "kube-api-access-kx7vw") pod "bcf88afe-6540-4494-aae0-2d16af3cc3a2" (UID: "bcf88afe-6540-4494-aae0-2d16af3cc3a2"). InnerVolumeSpecName "kube-api-access-kx7vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.450315 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx7vw\" (UniqueName: \"kubernetes.io/projected/bcf88afe-6540-4494-aae0-2d16af3cc3a2-kube-api-access-kx7vw\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.450358 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.475105 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcf88afe-6540-4494-aae0-2d16af3cc3a2" (UID: "bcf88afe-6540-4494-aae0-2d16af3cc3a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.552685 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf88afe-6540-4494-aae0-2d16af3cc3a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.729526 4799 generic.go:334] "Generic (PLEG): container finished" podID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerID="735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405" exitCode=0 Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.729566 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5twch" event={"ID":"bcf88afe-6540-4494-aae0-2d16af3cc3a2","Type":"ContainerDied","Data":"735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405"} Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.729594 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5twch" event={"ID":"bcf88afe-6540-4494-aae0-2d16af3cc3a2","Type":"ContainerDied","Data":"391b38f00d28a5940554483439c84fd6ac61b04207d6b4bbc9bebb8065983557"} Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.729613 4799 scope.go:117] "RemoveContainer" containerID="735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.729740 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5twch" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.755389 4799 scope.go:117] "RemoveContainer" containerID="c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.774049 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5twch"] Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.778993 4799 scope.go:117] "RemoveContainer" containerID="a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.782874 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5twch"] Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.820149 4799 scope.go:117] "RemoveContainer" containerID="735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405" Feb 16 13:55:42 crc kubenswrapper[4799]: E0216 13:55:42.820514 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405\": container with ID starting with 735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405 not found: ID does not exist" containerID="735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.820546 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405"} err="failed to get container status \"735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405\": rpc error: code = NotFound desc = could not find container \"735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405\": container with ID starting with 735318809ccd79c7dccea03cd7d2b4b77c6597e673884e3120e164620ad25405 not found: ID does 
not exist" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.820570 4799 scope.go:117] "RemoveContainer" containerID="c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea" Feb 16 13:55:42 crc kubenswrapper[4799]: E0216 13:55:42.820746 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea\": container with ID starting with c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea not found: ID does not exist" containerID="c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.820771 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea"} err="failed to get container status \"c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea\": rpc error: code = NotFound desc = could not find container \"c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea\": container with ID starting with c919b6f62bcaf7cc5ad038be33b005814f59d025f4a03b1ad6da198e4a5547ea not found: ID does not exist" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.820785 4799 scope.go:117] "RemoveContainer" containerID="a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48" Feb 16 13:55:42 crc kubenswrapper[4799]: E0216 13:55:42.821693 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48\": container with ID starting with a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48 not found: ID does not exist" containerID="a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48" Feb 16 13:55:42 crc kubenswrapper[4799]: I0216 13:55:42.821724 4799 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48"} err="failed to get container status \"a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48\": rpc error: code = NotFound desc = could not find container \"a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48\": container with ID starting with a673576c0ef383e6806ffafca9017cdf898b9451ed78903f28f426d6fca9ac48 not found: ID does not exist" Feb 16 13:55:43 crc kubenswrapper[4799]: I0216 13:55:43.178418 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" path="/var/lib/kubelet/pods/bcf88afe-6540-4494-aae0-2d16af3cc3a2/volumes" Feb 16 13:55:51 crc kubenswrapper[4799]: I0216 13:55:51.793408 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:55:51 crc kubenswrapper[4799]: I0216 13:55:51.793872 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:56:21 crc kubenswrapper[4799]: I0216 13:56:21.793485 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:56:21 crc kubenswrapper[4799]: I0216 13:56:21.794143 4799 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:56:21 crc kubenswrapper[4799]: I0216 13:56:21.794206 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:56:21 crc kubenswrapper[4799]: I0216 13:56:21.795250 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a46958b187cff675e0284299155342733ca343a8cf2c459bd0a708ded46e17a8"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:56:21 crc kubenswrapper[4799]: I0216 13:56:21.795345 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://a46958b187cff675e0284299155342733ca343a8cf2c459bd0a708ded46e17a8" gracePeriod=600 Feb 16 13:56:22 crc kubenswrapper[4799]: I0216 13:56:22.085502 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="a46958b187cff675e0284299155342733ca343a8cf2c459bd0a708ded46e17a8" exitCode=0 Feb 16 13:56:22 crc kubenswrapper[4799]: I0216 13:56:22.085591 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"a46958b187cff675e0284299155342733ca343a8cf2c459bd0a708ded46e17a8"} Feb 16 13:56:22 crc kubenswrapper[4799]: I0216 13:56:22.085877 4799 scope.go:117] "RemoveContainer" 
containerID="657e4eaf4b63d60731028e1e5ea5e833990638a6c170cf759a75c9967b04cd5b" Feb 16 13:56:23 crc kubenswrapper[4799]: I0216 13:56:23.101387 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"} Feb 16 13:57:53 crc kubenswrapper[4799]: I0216 13:57:53.971935 4799 generic.go:334] "Generic (PLEG): container finished" podID="c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" containerID="a9a6e13c0a18bdc351ebd3beaf74596ed2b51864d1f79982a126a39cbcca41bb" exitCode=0 Feb 16 13:57:53 crc kubenswrapper[4799]: I0216 13:57:53.972046 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd","Type":"ContainerDied","Data":"a9a6e13c0a18bdc351ebd3beaf74596ed2b51864d1f79982a126a39cbcca41bb"} Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.414570 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.551430 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config-secret\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.551840 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-workdir\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.551882 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqtg7\" (UniqueName: \"kubernetes.io/projected/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-kube-api-access-pqtg7\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.551933 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-config-data\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.551952 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.552006 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-temporary\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.552066 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ca-certs\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.552117 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ssh-key\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.552169 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config\") pod \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\" (UID: \"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd\") " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.552749 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.553059 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-config-data" (OuterVolumeSpecName: "config-data") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.558711 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-kube-api-access-pqtg7" (OuterVolumeSpecName: "kube-api-access-pqtg7") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "kube-api-access-pqtg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.558850 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.559634 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.587085 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.587195 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.592555 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.618716 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" (UID: "c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.654958 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqtg7\" (UniqueName: \"kubernetes.io/projected/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-kube-api-access-pqtg7\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.655009 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.655049 4799 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.655072 4799 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.655088 4799 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.655101 4799 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.655112 4799 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.655139 
4799 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.655151 4799 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.686996 4799 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.757224 4799 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.993323 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd","Type":"ContainerDied","Data":"915b3a462290921fad9deba60ec8e0e496a05c41c528c4e1596be91697adb44d"} Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.993639 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="915b3a462290921fad9deba60ec8e0e496a05c41c528c4e1596be91697adb44d" Feb 16 13:57:55 crc kubenswrapper[4799]: I0216 13:57:55.993395 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.099355 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:58:05 crc kubenswrapper[4799]: E0216 13:58:05.100316 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="extract-content" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.100330 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="extract-content" Feb 16 13:58:05 crc kubenswrapper[4799]: E0216 13:58:05.100350 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="extract-utilities" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.100358 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="extract-utilities" Feb 16 13:58:05 crc kubenswrapper[4799]: E0216 13:58:05.100385 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="registry-server" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.100392 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="registry-server" Feb 16 13:58:05 crc kubenswrapper[4799]: E0216 13:58:05.100410 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.100416 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.100584 4799 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="bcf88afe-6540-4494-aae0-2d16af3cc3a2" containerName="registry-server" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.100611 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.101291 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.103059 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zhw5r" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.108324 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.259711 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba576df3-d525-4b57-9913-4c2c86246682\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.259830 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhfp\" (UniqueName: \"kubernetes.io/projected/ba576df3-d525-4b57-9913-4c2c86246682-kube-api-access-7jhfp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba576df3-d525-4b57-9913-4c2c86246682\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.362240 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba576df3-d525-4b57-9913-4c2c86246682\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.362424 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jhfp\" (UniqueName: \"kubernetes.io/projected/ba576df3-d525-4b57-9913-4c2c86246682-kube-api-access-7jhfp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba576df3-d525-4b57-9913-4c2c86246682\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.362722 4799 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba576df3-d525-4b57-9913-4c2c86246682\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.389504 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jhfp\" (UniqueName: \"kubernetes.io/projected/ba576df3-d525-4b57-9913-4c2c86246682-kube-api-access-7jhfp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba576df3-d525-4b57-9913-4c2c86246682\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.399821 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ba576df3-d525-4b57-9913-4c2c86246682\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 
crc kubenswrapper[4799]: I0216 13:58:05.453648 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.934280 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:58:05 crc kubenswrapper[4799]: I0216 13:58:05.946554 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:58:06 crc kubenswrapper[4799]: I0216 13:58:06.079035 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ba576df3-d525-4b57-9913-4c2c86246682","Type":"ContainerStarted","Data":"07026ec894e5537a4a2d6e57721ff44afbb374528d88e69dbf283888ce262713"} Feb 16 13:58:07 crc kubenswrapper[4799]: I0216 13:58:07.091100 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ba576df3-d525-4b57-9913-4c2c86246682","Type":"ContainerStarted","Data":"7450e2ccb2b12c9c238f2e22e9269bdf140d21a057c78811cb1064c922da0ed6"} Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.798017 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=26.97598467 podStartE2EDuration="27.797999105s" podCreationTimestamp="2026-02-16 13:58:05 +0000 UTC" firstStartedPulling="2026-02-16 13:58:05.946307858 +0000 UTC m=+5191.539323192" lastFinishedPulling="2026-02-16 13:58:06.768322293 +0000 UTC m=+5192.361337627" observedRunningTime="2026-02-16 13:58:07.109885897 +0000 UTC m=+5192.702901231" watchObservedRunningTime="2026-02-16 13:58:32.797999105 +0000 UTC m=+5218.391014439" Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.806914 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-h5p99/must-gather-vvkcn"] Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.808514 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/must-gather-vvkcn" Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.809925 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-h5p99"/"openshift-service-ca.crt" Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.812182 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-h5p99"/"kube-root-ca.crt" Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.821040 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h5p99/must-gather-vvkcn"] Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.827404 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-h5p99"/"default-dockercfg-5tp6k" Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.970973 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2kx\" (UniqueName: \"kubernetes.io/projected/e7f6ad70-d861-46e3-a282-d134389f05fb-kube-api-access-9x2kx\") pod \"must-gather-vvkcn\" (UID: \"e7f6ad70-d861-46e3-a282-d134389f05fb\") " pod="openshift-must-gather-h5p99/must-gather-vvkcn" Feb 16 13:58:32 crc kubenswrapper[4799]: I0216 13:58:32.971034 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7f6ad70-d861-46e3-a282-d134389f05fb-must-gather-output\") pod \"must-gather-vvkcn\" (UID: \"e7f6ad70-d861-46e3-a282-d134389f05fb\") " pod="openshift-must-gather-h5p99/must-gather-vvkcn" Feb 16 13:58:33 crc kubenswrapper[4799]: I0216 13:58:33.073417 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2kx\" (UniqueName: 
\"kubernetes.io/projected/e7f6ad70-d861-46e3-a282-d134389f05fb-kube-api-access-9x2kx\") pod \"must-gather-vvkcn\" (UID: \"e7f6ad70-d861-46e3-a282-d134389f05fb\") " pod="openshift-must-gather-h5p99/must-gather-vvkcn" Feb 16 13:58:33 crc kubenswrapper[4799]: I0216 13:58:33.073486 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7f6ad70-d861-46e3-a282-d134389f05fb-must-gather-output\") pod \"must-gather-vvkcn\" (UID: \"e7f6ad70-d861-46e3-a282-d134389f05fb\") " pod="openshift-must-gather-h5p99/must-gather-vvkcn" Feb 16 13:58:33 crc kubenswrapper[4799]: I0216 13:58:33.073904 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7f6ad70-d861-46e3-a282-d134389f05fb-must-gather-output\") pod \"must-gather-vvkcn\" (UID: \"e7f6ad70-d861-46e3-a282-d134389f05fb\") " pod="openshift-must-gather-h5p99/must-gather-vvkcn" Feb 16 13:58:33 crc kubenswrapper[4799]: I0216 13:58:33.095318 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2kx\" (UniqueName: \"kubernetes.io/projected/e7f6ad70-d861-46e3-a282-d134389f05fb-kube-api-access-9x2kx\") pod \"must-gather-vvkcn\" (UID: \"e7f6ad70-d861-46e3-a282-d134389f05fb\") " pod="openshift-must-gather-h5p99/must-gather-vvkcn" Feb 16 13:58:33 crc kubenswrapper[4799]: I0216 13:58:33.128820 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h5p99/must-gather-vvkcn" Feb 16 13:58:33 crc kubenswrapper[4799]: I0216 13:58:33.623221 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h5p99/must-gather-vvkcn"] Feb 16 13:58:34 crc kubenswrapper[4799]: I0216 13:58:34.378672 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/must-gather-vvkcn" event={"ID":"e7f6ad70-d861-46e3-a282-d134389f05fb","Type":"ContainerStarted","Data":"e6bbe65cd6a0cdf57c88e17716860cd778a7a58840d52e3a050fe1905a9581aa"} Feb 16 13:58:40 crc kubenswrapper[4799]: I0216 13:58:40.456080 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/must-gather-vvkcn" event={"ID":"e7f6ad70-d861-46e3-a282-d134389f05fb","Type":"ContainerStarted","Data":"1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7"} Feb 16 13:58:40 crc kubenswrapper[4799]: I0216 13:58:40.456892 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/must-gather-vvkcn" event={"ID":"e7f6ad70-d861-46e3-a282-d134389f05fb","Type":"ContainerStarted","Data":"bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561"} Feb 16 13:58:40 crc kubenswrapper[4799]: I0216 13:58:40.484717 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-h5p99/must-gather-vvkcn" podStartSLOduration=2.350740048 podStartE2EDuration="8.484698448s" podCreationTimestamp="2026-02-16 13:58:32 +0000 UTC" firstStartedPulling="2026-02-16 13:58:33.62704015 +0000 UTC m=+5219.220055484" lastFinishedPulling="2026-02-16 13:58:39.76099855 +0000 UTC m=+5225.354013884" observedRunningTime="2026-02-16 13:58:40.480786937 +0000 UTC m=+5226.073802261" watchObservedRunningTime="2026-02-16 13:58:40.484698448 +0000 UTC m=+5226.077713782" Feb 16 13:58:43 crc kubenswrapper[4799]: I0216 13:58:43.691106 4799 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-h5p99/crc-debug-49f4k"] Feb 16 13:58:43 crc kubenswrapper[4799]: I0216 13:58:43.693154 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:58:43 crc kubenswrapper[4799]: I0216 13:58:43.768020 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60fa6db6-86b7-48a4-9427-099b9f3b81c1-host\") pod \"crc-debug-49f4k\" (UID: \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\") " pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:58:43 crc kubenswrapper[4799]: I0216 13:58:43.768105 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whtck\" (UniqueName: \"kubernetes.io/projected/60fa6db6-86b7-48a4-9427-099b9f3b81c1-kube-api-access-whtck\") pod \"crc-debug-49f4k\" (UID: \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\") " pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:58:43 crc kubenswrapper[4799]: I0216 13:58:43.869154 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60fa6db6-86b7-48a4-9427-099b9f3b81c1-host\") pod \"crc-debug-49f4k\" (UID: \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\") " pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:58:43 crc kubenswrapper[4799]: I0216 13:58:43.869435 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whtck\" (UniqueName: \"kubernetes.io/projected/60fa6db6-86b7-48a4-9427-099b9f3b81c1-kube-api-access-whtck\") pod \"crc-debug-49f4k\" (UID: \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\") " pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:58:43 crc kubenswrapper[4799]: I0216 13:58:43.869341 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/60fa6db6-86b7-48a4-9427-099b9f3b81c1-host\") pod \"crc-debug-49f4k\" (UID: \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\") " pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:58:43 crc kubenswrapper[4799]: I0216 13:58:43.889216 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whtck\" (UniqueName: \"kubernetes.io/projected/60fa6db6-86b7-48a4-9427-099b9f3b81c1-kube-api-access-whtck\") pod \"crc-debug-49f4k\" (UID: \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\") " pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:58:44 crc kubenswrapper[4799]: I0216 13:58:44.015515 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:58:44 crc kubenswrapper[4799]: W0216 13:58:44.079424 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60fa6db6_86b7_48a4_9427_099b9f3b81c1.slice/crio-7bc07247c86580fdf41916f4cb21a913e7f271586e68bb896a685e61cc830401 WatchSource:0}: Error finding container 7bc07247c86580fdf41916f4cb21a913e7f271586e68bb896a685e61cc830401: Status 404 returned error can't find the container with id 7bc07247c86580fdf41916f4cb21a913e7f271586e68bb896a685e61cc830401 Feb 16 13:58:44 crc kubenswrapper[4799]: I0216 13:58:44.491899 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/crc-debug-49f4k" event={"ID":"60fa6db6-86b7-48a4-9427-099b9f3b81c1","Type":"ContainerStarted","Data":"7bc07247c86580fdf41916f4cb21a913e7f271586e68bb896a685e61cc830401"} Feb 16 13:58:45 crc kubenswrapper[4799]: E0216 13:58:45.457730 4799 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:39314->38.102.83.154:41287: write tcp 38.102.83.154:39314->38.102.83.154:41287: write: broken pipe Feb 16 13:58:51 crc kubenswrapper[4799]: I0216 13:58:51.792981 4799 patch_prober.go:28] interesting 
pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:58:51 crc kubenswrapper[4799]: I0216 13:58:51.793620 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:58:55 crc kubenswrapper[4799]: I0216 13:58:55.612622 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/crc-debug-49f4k" event={"ID":"60fa6db6-86b7-48a4-9427-099b9f3b81c1","Type":"ContainerStarted","Data":"65252dafa5da9f7362cfcc89396b3f4bd498fbaff9712b56f4a66459665157d8"} Feb 16 13:58:55 crc kubenswrapper[4799]: I0216 13:58:55.632451 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-h5p99/crc-debug-49f4k" podStartSLOduration=2.182736391 podStartE2EDuration="12.632431165s" podCreationTimestamp="2026-02-16 13:58:43 +0000 UTC" firstStartedPulling="2026-02-16 13:58:44.082725584 +0000 UTC m=+5229.675740928" lastFinishedPulling="2026-02-16 13:58:54.532420368 +0000 UTC m=+5240.125435702" observedRunningTime="2026-02-16 13:58:55.625393226 +0000 UTC m=+5241.218408570" watchObservedRunningTime="2026-02-16 13:58:55.632431165 +0000 UTC m=+5241.225446499" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.326932 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bc967"] Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.330002 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.361257 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bc967"] Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.472896 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-utilities\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.473360 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-catalog-content\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.473396 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lpdw\" (UniqueName: \"kubernetes.io/projected/37061df9-026d-4a9b-b733-1ce9b40a90b1-kube-api-access-8lpdw\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.574802 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-catalog-content\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.575058 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8lpdw\" (UniqueName: \"kubernetes.io/projected/37061df9-026d-4a9b-b733-1ce9b40a90b1-kube-api-access-8lpdw\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.575203 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-utilities\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.575559 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-catalog-content\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.575855 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-utilities\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.596499 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lpdw\" (UniqueName: \"kubernetes.io/projected/37061df9-026d-4a9b-b733-1ce9b40a90b1-kube-api-access-8lpdw\") pod \"certified-operators-bc967\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:04 crc kubenswrapper[4799]: I0216 13:59:04.660532 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:05 crc kubenswrapper[4799]: I0216 13:59:05.220861 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bc967"] Feb 16 13:59:05 crc kubenswrapper[4799]: I0216 13:59:05.720698 4799 generic.go:334] "Generic (PLEG): container finished" podID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerID="bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb" exitCode=0 Feb 16 13:59:05 crc kubenswrapper[4799]: I0216 13:59:05.720893 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc967" event={"ID":"37061df9-026d-4a9b-b733-1ce9b40a90b1","Type":"ContainerDied","Data":"bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb"} Feb 16 13:59:05 crc kubenswrapper[4799]: I0216 13:59:05.721441 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc967" event={"ID":"37061df9-026d-4a9b-b733-1ce9b40a90b1","Type":"ContainerStarted","Data":"e6ab7d9e049611fae705fa76be4d6a555a5f4aae66807093ebf3599bfb9b8930"} Feb 16 13:59:09 crc kubenswrapper[4799]: I0216 13:59:09.759996 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc967" event={"ID":"37061df9-026d-4a9b-b733-1ce9b40a90b1","Type":"ContainerStarted","Data":"f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca"} Feb 16 13:59:10 crc kubenswrapper[4799]: I0216 13:59:10.771723 4799 generic.go:334] "Generic (PLEG): container finished" podID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerID="f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca" exitCode=0 Feb 16 13:59:10 crc kubenswrapper[4799]: I0216 13:59:10.771811 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc967" 
event={"ID":"37061df9-026d-4a9b-b733-1ce9b40a90b1","Type":"ContainerDied","Data":"f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca"} Feb 16 13:59:11 crc kubenswrapper[4799]: I0216 13:59:11.783101 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc967" event={"ID":"37061df9-026d-4a9b-b733-1ce9b40a90b1","Type":"ContainerStarted","Data":"5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444"} Feb 16 13:59:11 crc kubenswrapper[4799]: I0216 13:59:11.803368 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bc967" podStartSLOduration=2.086035531 podStartE2EDuration="7.80334224s" podCreationTimestamp="2026-02-16 13:59:04 +0000 UTC" firstStartedPulling="2026-02-16 13:59:05.725478949 +0000 UTC m=+5251.318494283" lastFinishedPulling="2026-02-16 13:59:11.442785648 +0000 UTC m=+5257.035800992" observedRunningTime="2026-02-16 13:59:11.799420759 +0000 UTC m=+5257.392436093" watchObservedRunningTime="2026-02-16 13:59:11.80334224 +0000 UTC m=+5257.396357594" Feb 16 13:59:14 crc kubenswrapper[4799]: I0216 13:59:14.660942 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:14 crc kubenswrapper[4799]: I0216 13:59:14.661423 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:14 crc kubenswrapper[4799]: I0216 13:59:14.712793 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:21 crc kubenswrapper[4799]: I0216 13:59:21.792537 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 16 13:59:21 crc kubenswrapper[4799]: I0216 13:59:21.793177 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:59:24 crc kubenswrapper[4799]: I0216 13:59:24.727169 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:24 crc kubenswrapper[4799]: I0216 13:59:24.775457 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bc967"] Feb 16 13:59:24 crc kubenswrapper[4799]: I0216 13:59:24.921093 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bc967" podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerName="registry-server" containerID="cri-o://5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444" gracePeriod=2 Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.410952 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.446509 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-catalog-content\") pod \"37061df9-026d-4a9b-b733-1ce9b40a90b1\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.446620 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-utilities\") pod \"37061df9-026d-4a9b-b733-1ce9b40a90b1\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.446742 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lpdw\" (UniqueName: \"kubernetes.io/projected/37061df9-026d-4a9b-b733-1ce9b40a90b1-kube-api-access-8lpdw\") pod \"37061df9-026d-4a9b-b733-1ce9b40a90b1\" (UID: \"37061df9-026d-4a9b-b733-1ce9b40a90b1\") " Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.450384 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-utilities" (OuterVolumeSpecName: "utilities") pod "37061df9-026d-4a9b-b733-1ce9b40a90b1" (UID: "37061df9-026d-4a9b-b733-1ce9b40a90b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.513283 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37061df9-026d-4a9b-b733-1ce9b40a90b1" (UID: "37061df9-026d-4a9b-b733-1ce9b40a90b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.550241 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.550277 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37061df9-026d-4a9b-b733-1ce9b40a90b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.817772 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37061df9-026d-4a9b-b733-1ce9b40a90b1-kube-api-access-8lpdw" (OuterVolumeSpecName: "kube-api-access-8lpdw") pod "37061df9-026d-4a9b-b733-1ce9b40a90b1" (UID: "37061df9-026d-4a9b-b733-1ce9b40a90b1"). InnerVolumeSpecName "kube-api-access-8lpdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.855269 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lpdw\" (UniqueName: \"kubernetes.io/projected/37061df9-026d-4a9b-b733-1ce9b40a90b1-kube-api-access-8lpdw\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.945757 4799 generic.go:334] "Generic (PLEG): container finished" podID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerID="5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444" exitCode=0 Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.945833 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bc967" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.945841 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc967" event={"ID":"37061df9-026d-4a9b-b733-1ce9b40a90b1","Type":"ContainerDied","Data":"5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444"} Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.945900 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bc967" event={"ID":"37061df9-026d-4a9b-b733-1ce9b40a90b1","Type":"ContainerDied","Data":"e6ab7d9e049611fae705fa76be4d6a555a5f4aae66807093ebf3599bfb9b8930"} Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.945920 4799 scope.go:117] "RemoveContainer" containerID="5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444" Feb 16 13:59:25 crc kubenswrapper[4799]: I0216 13:59:25.971525 4799 scope.go:117] "RemoveContainer" containerID="f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca" Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.016731 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bc967"] Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.035707 4799 scope.go:117] "RemoveContainer" containerID="bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb" Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.037150 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bc967"] Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.067472 4799 scope.go:117] "RemoveContainer" containerID="5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444" Feb 16 13:59:26 crc kubenswrapper[4799]: E0216 13:59:26.068031 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444\": container with ID starting with 5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444 not found: ID does not exist" containerID="5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444" Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.068081 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444"} err="failed to get container status \"5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444\": rpc error: code = NotFound desc = could not find container \"5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444\": container with ID starting with 5e00bce0c145cc98ce50d557d40409b79f35841f3decb1f9dabe9060445d2444 not found: ID does not exist" Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.068132 4799 scope.go:117] "RemoveContainer" containerID="f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca" Feb 16 13:59:26 crc kubenswrapper[4799]: E0216 13:59:26.068483 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca\": container with ID starting with f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca not found: ID does not exist" containerID="f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca" Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.068526 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca"} err="failed to get container status \"f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca\": rpc error: code = NotFound desc = could not find container \"f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca\": container with ID 
starting with f01966e8db5e29c9a1bafe0fa0316218f278bced02962514ba05c72725f360ca not found: ID does not exist" Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.068541 4799 scope.go:117] "RemoveContainer" containerID="bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb" Feb 16 13:59:26 crc kubenswrapper[4799]: E0216 13:59:26.068947 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb\": container with ID starting with bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb not found: ID does not exist" containerID="bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb" Feb 16 13:59:26 crc kubenswrapper[4799]: I0216 13:59:26.068970 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb"} err="failed to get container status \"bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb\": rpc error: code = NotFound desc = could not find container \"bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb\": container with ID starting with bfbe1d04353603883ef4b3829f5f52ae27c8ce3536c64d3d72db8df9b5ec57cb not found: ID does not exist" Feb 16 13:59:27 crc kubenswrapper[4799]: I0216 13:59:27.163507 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" path="/var/lib/kubelet/pods/37061df9-026d-4a9b-b733-1ce9b40a90b1/volumes" Feb 16 13:59:43 crc kubenswrapper[4799]: I0216 13:59:43.112415 4799 generic.go:334] "Generic (PLEG): container finished" podID="60fa6db6-86b7-48a4-9427-099b9f3b81c1" containerID="65252dafa5da9f7362cfcc89396b3f4bd498fbaff9712b56f4a66459665157d8" exitCode=0 Feb 16 13:59:43 crc kubenswrapper[4799]: I0216 13:59:43.112559 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-h5p99/crc-debug-49f4k" event={"ID":"60fa6db6-86b7-48a4-9427-099b9f3b81c1","Type":"ContainerDied","Data":"65252dafa5da9f7362cfcc89396b3f4bd498fbaff9712b56f4a66459665157d8"} Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.225949 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.277714 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h5p99/crc-debug-49f4k"] Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.297559 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h5p99/crc-debug-49f4k"] Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.351582 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60fa6db6-86b7-48a4-9427-099b9f3b81c1-host\") pod \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\" (UID: \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\") " Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.351816 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whtck\" (UniqueName: \"kubernetes.io/projected/60fa6db6-86b7-48a4-9427-099b9f3b81c1-kube-api-access-whtck\") pod \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\" (UID: \"60fa6db6-86b7-48a4-9427-099b9f3b81c1\") " Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.352247 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60fa6db6-86b7-48a4-9427-099b9f3b81c1-host" (OuterVolumeSpecName: "host") pod "60fa6db6-86b7-48a4-9427-099b9f3b81c1" (UID: "60fa6db6-86b7-48a4-9427-099b9f3b81c1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.352544 4799 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60fa6db6-86b7-48a4-9427-099b9f3b81c1-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.360730 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fa6db6-86b7-48a4-9427-099b9f3b81c1-kube-api-access-whtck" (OuterVolumeSpecName: "kube-api-access-whtck") pod "60fa6db6-86b7-48a4-9427-099b9f3b81c1" (UID: "60fa6db6-86b7-48a4-9427-099b9f3b81c1"). InnerVolumeSpecName "kube-api-access-whtck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:59:44 crc kubenswrapper[4799]: I0216 13:59:44.454217 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whtck\" (UniqueName: \"kubernetes.io/projected/60fa6db6-86b7-48a4-9427-099b9f3b81c1-kube-api-access-whtck\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.133285 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bc07247c86580fdf41916f4cb21a913e7f271586e68bb896a685e61cc830401" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.133622 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-49f4k" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.171199 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fa6db6-86b7-48a4-9427-099b9f3b81c1" path="/var/lib/kubelet/pods/60fa6db6-86b7-48a4-9427-099b9f3b81c1/volumes" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.496461 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h5p99/crc-debug-vtl9b"] Feb 16 13:59:45 crc kubenswrapper[4799]: E0216 13:59:45.496886 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerName="registry-server" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.496900 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerName="registry-server" Feb 16 13:59:45 crc kubenswrapper[4799]: E0216 13:59:45.496918 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fa6db6-86b7-48a4-9427-099b9f3b81c1" containerName="container-00" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.496924 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fa6db6-86b7-48a4-9427-099b9f3b81c1" containerName="container-00" Feb 16 13:59:45 crc kubenswrapper[4799]: E0216 13:59:45.496936 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerName="extract-utilities" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.496943 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerName="extract-utilities" Feb 16 13:59:45 crc kubenswrapper[4799]: E0216 13:59:45.496964 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerName="extract-content" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.496969 4799 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerName="extract-content" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.497174 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="37061df9-026d-4a9b-b733-1ce9b40a90b1" containerName="registry-server" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.497195 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fa6db6-86b7-48a4-9427-099b9f3b81c1" containerName="container-00" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.497900 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.577790 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9kt8\" (UniqueName: \"kubernetes.io/projected/561a8da5-5f25-427c-8b24-bf8af25b73db-kube-api-access-s9kt8\") pod \"crc-debug-vtl9b\" (UID: \"561a8da5-5f25-427c-8b24-bf8af25b73db\") " pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.578077 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/561a8da5-5f25-427c-8b24-bf8af25b73db-host\") pod \"crc-debug-vtl9b\" (UID: \"561a8da5-5f25-427c-8b24-bf8af25b73db\") " pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.679606 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9kt8\" (UniqueName: \"kubernetes.io/projected/561a8da5-5f25-427c-8b24-bf8af25b73db-kube-api-access-s9kt8\") pod \"crc-debug-vtl9b\" (UID: \"561a8da5-5f25-427c-8b24-bf8af25b73db\") " pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.679784 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/561a8da5-5f25-427c-8b24-bf8af25b73db-host\") pod \"crc-debug-vtl9b\" (UID: \"561a8da5-5f25-427c-8b24-bf8af25b73db\") " pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.679886 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/561a8da5-5f25-427c-8b24-bf8af25b73db-host\") pod \"crc-debug-vtl9b\" (UID: \"561a8da5-5f25-427c-8b24-bf8af25b73db\") " pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.703463 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9kt8\" (UniqueName: \"kubernetes.io/projected/561a8da5-5f25-427c-8b24-bf8af25b73db-kube-api-access-s9kt8\") pod \"crc-debug-vtl9b\" (UID: \"561a8da5-5f25-427c-8b24-bf8af25b73db\") " pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:45 crc kubenswrapper[4799]: I0216 13:59:45.813331 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:46 crc kubenswrapper[4799]: I0216 13:59:46.141569 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/crc-debug-vtl9b" event={"ID":"561a8da5-5f25-427c-8b24-bf8af25b73db","Type":"ContainerStarted","Data":"6bed41a9b59c2473a16ded68a5b7134796c47239def7e0549cb006861ca34ee0"} Feb 16 13:59:47 crc kubenswrapper[4799]: I0216 13:59:47.162969 4799 generic.go:334] "Generic (PLEG): container finished" podID="561a8da5-5f25-427c-8b24-bf8af25b73db" containerID="c2a6c575e0266f657e410d6c97e83cc93be7b123ba4cf13cd76b595c6f8f6000" exitCode=0 Feb 16 13:59:47 crc kubenswrapper[4799]: I0216 13:59:47.170982 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/crc-debug-vtl9b" event={"ID":"561a8da5-5f25-427c-8b24-bf8af25b73db","Type":"ContainerDied","Data":"c2a6c575e0266f657e410d6c97e83cc93be7b123ba4cf13cd76b595c6f8f6000"} Feb 16 13:59:48 crc kubenswrapper[4799]: I0216 13:59:48.302009 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:48 crc kubenswrapper[4799]: I0216 13:59:48.468346 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/561a8da5-5f25-427c-8b24-bf8af25b73db-host\") pod \"561a8da5-5f25-427c-8b24-bf8af25b73db\" (UID: \"561a8da5-5f25-427c-8b24-bf8af25b73db\") " Feb 16 13:59:48 crc kubenswrapper[4799]: I0216 13:59:48.468699 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9kt8\" (UniqueName: \"kubernetes.io/projected/561a8da5-5f25-427c-8b24-bf8af25b73db-kube-api-access-s9kt8\") pod \"561a8da5-5f25-427c-8b24-bf8af25b73db\" (UID: \"561a8da5-5f25-427c-8b24-bf8af25b73db\") " Feb 16 13:59:48 crc kubenswrapper[4799]: I0216 13:59:48.468427 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/561a8da5-5f25-427c-8b24-bf8af25b73db-host" (OuterVolumeSpecName: "host") pod "561a8da5-5f25-427c-8b24-bf8af25b73db" (UID: "561a8da5-5f25-427c-8b24-bf8af25b73db"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:59:48 crc kubenswrapper[4799]: I0216 13:59:48.469761 4799 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/561a8da5-5f25-427c-8b24-bf8af25b73db-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:48 crc kubenswrapper[4799]: I0216 13:59:48.485052 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561a8da5-5f25-427c-8b24-bf8af25b73db-kube-api-access-s9kt8" (OuterVolumeSpecName: "kube-api-access-s9kt8") pod "561a8da5-5f25-427c-8b24-bf8af25b73db" (UID: "561a8da5-5f25-427c-8b24-bf8af25b73db"). InnerVolumeSpecName "kube-api-access-s9kt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:59:48 crc kubenswrapper[4799]: I0216 13:59:48.571277 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9kt8\" (UniqueName: \"kubernetes.io/projected/561a8da5-5f25-427c-8b24-bf8af25b73db-kube-api-access-s9kt8\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:49 crc kubenswrapper[4799]: I0216 13:59:49.189569 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/crc-debug-vtl9b" event={"ID":"561a8da5-5f25-427c-8b24-bf8af25b73db","Type":"ContainerDied","Data":"6bed41a9b59c2473a16ded68a5b7134796c47239def7e0549cb006861ca34ee0"} Feb 16 13:59:49 crc kubenswrapper[4799]: I0216 13:59:49.189611 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bed41a9b59c2473a16ded68a5b7134796c47239def7e0549cb006861ca34ee0" Feb 16 13:59:49 crc kubenswrapper[4799]: I0216 13:59:49.189660 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-vtl9b" Feb 16 13:59:50 crc kubenswrapper[4799]: I0216 13:59:50.005565 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h5p99/crc-debug-vtl9b"] Feb 16 13:59:50 crc kubenswrapper[4799]: I0216 13:59:50.015277 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h5p99/crc-debug-vtl9b"] Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.164274 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="561a8da5-5f25-427c-8b24-bf8af25b73db" path="/var/lib/kubelet/pods/561a8da5-5f25-427c-8b24-bf8af25b73db/volumes" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.233291 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h5p99/crc-debug-wk5nb"] Feb 16 13:59:51 crc kubenswrapper[4799]: E0216 13:59:51.233712 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561a8da5-5f25-427c-8b24-bf8af25b73db" 
containerName="container-00" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.233733 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="561a8da5-5f25-427c-8b24-bf8af25b73db" containerName="container-00" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.233964 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="561a8da5-5f25-427c-8b24-bf8af25b73db" containerName="container-00" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.234871 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.424620 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-host\") pod \"crc-debug-wk5nb\" (UID: \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\") " pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.425148 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9wm\" (UniqueName: \"kubernetes.io/projected/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-kube-api-access-sx9wm\") pod \"crc-debug-wk5nb\" (UID: \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\") " pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.527055 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9wm\" (UniqueName: \"kubernetes.io/projected/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-kube-api-access-sx9wm\") pod \"crc-debug-wk5nb\" (UID: \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\") " pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.527169 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-host\") pod \"crc-debug-wk5nb\" (UID: \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\") " pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.527271 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-host\") pod \"crc-debug-wk5nb\" (UID: \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\") " pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.552789 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9wm\" (UniqueName: \"kubernetes.io/projected/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-kube-api-access-sx9wm\") pod \"crc-debug-wk5nb\" (UID: \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\") " pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.560718 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.792718 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.793084 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.793160 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.794005 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:59:51 crc kubenswrapper[4799]: I0216 13:59:51.794066 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" gracePeriod=600 Feb 16 13:59:51 crc kubenswrapper[4799]: E0216 13:59:51.913078 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.215344 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" exitCode=0 Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.215412 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"} Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.215483 4799 scope.go:117] "RemoveContainer" containerID="a46958b187cff675e0284299155342733ca343a8cf2c459bd0a708ded46e17a8" Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.216347 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.216768 4799 generic.go:334] "Generic (PLEG): container finished" podID="e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23" containerID="a00263b1990fa42c64d128c120965506314227e0ff0118c7cb401840881c12ad" exitCode=0 Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.216799 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/crc-debug-wk5nb" event={"ID":"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23","Type":"ContainerDied","Data":"a00263b1990fa42c64d128c120965506314227e0ff0118c7cb401840881c12ad"} Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.216846 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-h5p99/crc-debug-wk5nb" event={"ID":"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23","Type":"ContainerStarted","Data":"4835a6e96523f36673dc39005fbf4a6142b689b4a620ba67c2e0e97ccd5f07f4"} Feb 16 13:59:52 crc kubenswrapper[4799]: E0216 13:59:52.216791 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.330695 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h5p99/crc-debug-wk5nb"] Feb 16 13:59:52 crc kubenswrapper[4799]: I0216 13:59:52.350022 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h5p99/crc-debug-wk5nb"] Feb 16 13:59:53 crc kubenswrapper[4799]: I0216 13:59:53.334539 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:53 crc kubenswrapper[4799]: I0216 13:59:53.462522 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx9wm\" (UniqueName: \"kubernetes.io/projected/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-kube-api-access-sx9wm\") pod \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\" (UID: \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\") " Feb 16 13:59:53 crc kubenswrapper[4799]: I0216 13:59:53.462706 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-host\") pod \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\" (UID: \"e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23\") " Feb 16 13:59:53 crc kubenswrapper[4799]: I0216 13:59:53.462800 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-host" (OuterVolumeSpecName: "host") pod "e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23" (UID: "e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:59:53 crc kubenswrapper[4799]: I0216 13:59:53.463334 4799 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:53 crc kubenswrapper[4799]: I0216 13:59:53.468144 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-kube-api-access-sx9wm" (OuterVolumeSpecName: "kube-api-access-sx9wm") pod "e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23" (UID: "e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23"). InnerVolumeSpecName "kube-api-access-sx9wm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:59:53 crc kubenswrapper[4799]: I0216 13:59:53.565742 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx9wm\" (UniqueName: \"kubernetes.io/projected/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23-kube-api-access-sx9wm\") on node \"crc\" DevicePath \"\"" Feb 16 13:59:54 crc kubenswrapper[4799]: I0216 13:59:54.236456 4799 scope.go:117] "RemoveContainer" containerID="a00263b1990fa42c64d128c120965506314227e0ff0118c7cb401840881c12ad" Feb 16 13:59:54 crc kubenswrapper[4799]: I0216 13:59:54.236682 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/crc-debug-wk5nb" Feb 16 13:59:55 crc kubenswrapper[4799]: I0216 13:59:55.162058 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23" path="/var/lib/kubelet/pods/e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23/volumes" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.159622 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5"] Feb 16 14:00:00 crc kubenswrapper[4799]: E0216 14:00:00.160986 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23" containerName="container-00" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.161010 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23" containerName="container-00" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.161362 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ee9b9d-2bdf-4d54-9bec-a1d227c7be23" containerName="container-00" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.162477 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.165587 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.166199 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.168898 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5"] Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.296742 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-secret-volume\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.296884 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-config-volume\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.296968 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzndj\" (UniqueName: \"kubernetes.io/projected/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-kube-api-access-lzndj\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.398476 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-config-volume\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.398557 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzndj\" (UniqueName: \"kubernetes.io/projected/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-kube-api-access-lzndj\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.398641 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-secret-volume\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.400495 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-config-volume\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.404056 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-secret-volume\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.416111 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzndj\" (UniqueName: \"kubernetes.io/projected/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-kube-api-access-lzndj\") pod \"collect-profiles-29520840-cj5b5\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:00 crc kubenswrapper[4799]: I0216 14:00:00.489111 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:01 crc kubenswrapper[4799]: I0216 14:00:01.021678 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5"] Feb 16 14:00:01 crc kubenswrapper[4799]: I0216 14:00:01.308988 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" event={"ID":"8b3071d6-42a3-4fc8-a492-fe9155fa87ad","Type":"ContainerStarted","Data":"7b2b6056b00a75c7249b718e6af2ca01f78694aed531b933d2c67085b9518a15"} Feb 16 14:00:01 crc kubenswrapper[4799]: I0216 14:00:01.309387 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" event={"ID":"8b3071d6-42a3-4fc8-a492-fe9155fa87ad","Type":"ContainerStarted","Data":"4cf6aeaaf0c39d3ec3485a90254e2245deccebb8de50840c3e6d88d9af41d503"} Feb 16 14:00:01 crc kubenswrapper[4799]: I0216 14:00:01.340467 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" 
podStartSLOduration=1.3404490230000001 podStartE2EDuration="1.340449023s" podCreationTimestamp="2026-02-16 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:00:01.333567598 +0000 UTC m=+5306.926582932" watchObservedRunningTime="2026-02-16 14:00:01.340449023 +0000 UTC m=+5306.933464357" Feb 16 14:00:02 crc kubenswrapper[4799]: I0216 14:00:02.324594 4799 generic.go:334] "Generic (PLEG): container finished" podID="8b3071d6-42a3-4fc8-a492-fe9155fa87ad" containerID="7b2b6056b00a75c7249b718e6af2ca01f78694aed531b933d2c67085b9518a15" exitCode=0 Feb 16 14:00:02 crc kubenswrapper[4799]: I0216 14:00:02.324695 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" event={"ID":"8b3071d6-42a3-4fc8-a492-fe9155fa87ad","Type":"ContainerDied","Data":"7b2b6056b00a75c7249b718e6af2ca01f78694aed531b933d2c67085b9518a15"} Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.698651 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.785074 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-config-volume\") pod \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.785225 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-secret-volume\") pod \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.785289 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzndj\" (UniqueName: \"kubernetes.io/projected/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-kube-api-access-lzndj\") pod \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\" (UID: \"8b3071d6-42a3-4fc8-a492-fe9155fa87ad\") " Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.786020 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-config-volume" (OuterVolumeSpecName: "config-volume") pod "8b3071d6-42a3-4fc8-a492-fe9155fa87ad" (UID: "8b3071d6-42a3-4fc8-a492-fe9155fa87ad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.794303 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-kube-api-access-lzndj" (OuterVolumeSpecName: "kube-api-access-lzndj") pod "8b3071d6-42a3-4fc8-a492-fe9155fa87ad" (UID: "8b3071d6-42a3-4fc8-a492-fe9155fa87ad"). 
InnerVolumeSpecName "kube-api-access-lzndj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.805314 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8b3071d6-42a3-4fc8-a492-fe9155fa87ad" (UID: "8b3071d6-42a3-4fc8-a492-fe9155fa87ad"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.887787 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.887828 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:03 crc kubenswrapper[4799]: I0216 14:00:03.887840 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzndj\" (UniqueName: \"kubernetes.io/projected/8b3071d6-42a3-4fc8-a492-fe9155fa87ad-kube-api-access-lzndj\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:04 crc kubenswrapper[4799]: I0216 14:00:04.149596 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:00:04 crc kubenswrapper[4799]: E0216 14:00:04.150057 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" 
podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:00:04 crc kubenswrapper[4799]: I0216 14:00:04.347528 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" event={"ID":"8b3071d6-42a3-4fc8-a492-fe9155fa87ad","Type":"ContainerDied","Data":"4cf6aeaaf0c39d3ec3485a90254e2245deccebb8de50840c3e6d88d9af41d503"} Feb 16 14:00:04 crc kubenswrapper[4799]: I0216 14:00:04.347582 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf6aeaaf0c39d3ec3485a90254e2245deccebb8de50840c3e6d88d9af41d503" Feb 16 14:00:04 crc kubenswrapper[4799]: I0216 14:00:04.347595 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-cj5b5" Feb 16 14:00:04 crc kubenswrapper[4799]: I0216 14:00:04.424857 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w"] Feb 16 14:00:04 crc kubenswrapper[4799]: I0216 14:00:04.443976 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-5882w"] Feb 16 14:00:05 crc kubenswrapper[4799]: I0216 14:00:05.165513 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c958a8-65bd-4c54-8136-a8357a69d67b" path="/var/lib/kubelet/pods/21c958a8-65bd-4c54-8136-a8357a69d67b/volumes" Feb 16 14:00:16 crc kubenswrapper[4799]: I0216 14:00:16.149899 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:00:16 crc kubenswrapper[4799]: E0216 14:00:16.150795 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:00:28 crc kubenswrapper[4799]: I0216 14:00:28.711105 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cdd7b58f8-6bxrn_b2510448-629c-43df-9492-a07c96a8b5f0/barbican-api/0.log" Feb 16 14:00:28 crc kubenswrapper[4799]: I0216 14:00:28.746742 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cdd7b58f8-6bxrn_b2510448-629c-43df-9492-a07c96a8b5f0/barbican-api-log/0.log" Feb 16 14:00:28 crc kubenswrapper[4799]: I0216 14:00:28.875027 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5584d58cd8-z4cwc_6cadefef-9278-4473-a8c8-97911ac9b269/barbican-keystone-listener/0.log" Feb 16 14:00:28 crc kubenswrapper[4799]: I0216 14:00:28.938471 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56d6b7fd5c-s6xhs_99699fe4-f20c-42e0-9c4f-029b9ee24fdb/barbican-worker/0.log" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.023910 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5584d58cd8-z4cwc_6cadefef-9278-4473-a8c8-97911ac9b269/barbican-keystone-listener-log/0.log" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.107019 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56d6b7fd5c-s6xhs_99699fe4-f20c-42e0-9c4f-029b9ee24fdb/barbican-worker-log/0.log" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.149303 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:00:29 crc kubenswrapper[4799]: E0216 14:00:29.149572 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.256798 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs_4ea66d5c-7325-440d-816c-c02db1d1bf90/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.367391 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13a099ed-6620-4310-85c7-986b1a366a1b/ceilometer-central-agent/0.log" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.477352 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13a099ed-6620-4310-85c7-986b1a366a1b/proxy-httpd/0.log" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.503564 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13a099ed-6620-4310-85c7-986b1a366a1b/sg-core/0.log" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.581924 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13a099ed-6620-4310-85c7-986b1a366a1b/ceilometer-notification-agent/0.log" Feb 16 14:00:29 crc kubenswrapper[4799]: I0216 14:00:29.751988 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_15c3718e-7e67-4586-8532-6883f43129bd/cinder-api-log/0.log" Feb 16 14:00:30 crc kubenswrapper[4799]: I0216 14:00:30.045333 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ea67e1e3-d03f-49fa-a150-9ff09fca74ba/probe/0.log" Feb 16 14:00:30 crc kubenswrapper[4799]: I0216 14:00:30.182764 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-backup-0_ea67e1e3-d03f-49fa-a150-9ff09fca74ba/cinder-backup/0.log" Feb 16 14:00:30 crc kubenswrapper[4799]: I0216 14:00:30.234467 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_15c3718e-7e67-4586-8532-6883f43129bd/cinder-api/0.log" Feb 16 14:00:30 crc kubenswrapper[4799]: I0216 14:00:30.530905 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0404faed-9e4d-4374-83ef-13dc13839e7b/probe/0.log" Feb 16 14:00:30 crc kubenswrapper[4799]: I0216 14:00:30.553101 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0404faed-9e4d-4374-83ef-13dc13839e7b/cinder-scheduler/0.log" Feb 16 14:00:30 crc kubenswrapper[4799]: I0216 14:00:30.983345 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_64beb0d2-7a13-4a86-b4f8-8843611c254c/probe/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.206556 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_64beb0d2-7a13-4a86-b4f8-8843611c254c/cinder-volume/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.219532 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_5f3698ec-879f-4ead-8ac9-e08fa64c655e/probe/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.268722 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_5f3698ec-879f-4ead-8ac9-e08fa64c655e/cinder-volume/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.440516 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf_e8cd035a-4f87-419c-994a-1ab09e6da101/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.488700 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5_db459b41-b7ab-4982-8889-11233d549c9b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.621832 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76846d67df-2cl9g_77997ea7-755d-40ed-94d6-baab5bd86a9b/init/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.816957 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76846d67df-2cl9g_77997ea7-755d-40ed-94d6-baab5bd86a9b/init/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.985703 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz_ceaa23db-d28e-4d2f-bf84-7336146bfb41/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:31 crc kubenswrapper[4799]: I0216 14:00:31.997214 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76846d67df-2cl9g_77997ea7-755d-40ed-94d6-baab5bd86a9b/dnsmasq-dns/0.log" Feb 16 14:00:32 crc kubenswrapper[4799]: I0216 14:00:32.180347 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0241ff0c-3747-414a-b48e-72ac52d5836a/glance-log/0.log" Feb 16 14:00:32 crc kubenswrapper[4799]: I0216 14:00:32.204317 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0241ff0c-3747-414a-b48e-72ac52d5836a/glance-httpd/0.log" Feb 16 14:00:32 crc kubenswrapper[4799]: I0216 14:00:32.342182 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71e60503-bb2b-452d-a96a-ef5ec0745d94/glance-log/0.log" Feb 16 14:00:32 crc kubenswrapper[4799]: I0216 14:00:32.386526 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_71e60503-bb2b-452d-a96a-ef5ec0745d94/glance-httpd/0.log" Feb 16 14:00:32 crc kubenswrapper[4799]: I0216 14:00:32.547884 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b64799464-xwrv9_aa66dcb2-43c2-4824-80f8-30911a4a8c72/horizon/0.log" Feb 16 14:00:32 crc kubenswrapper[4799]: I0216 14:00:32.617217 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-8fk67_6ad5bcca-c29e-4594-8698-4a139a80eb92/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:32 crc kubenswrapper[4799]: I0216 14:00:32.831576 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-thrw7_ff2369e0-1189-4a8f-abca-c8db832a8e8c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:33 crc kubenswrapper[4799]: I0216 14:00:33.054366 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29520781-vcnnm_2a2944ce-d43d-455d-81c0-21e082c4c544/keystone-cron/0.log" Feb 16 14:00:33 crc kubenswrapper[4799]: I0216 14:00:33.067997 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b64799464-xwrv9_aa66dcb2-43c2-4824-80f8-30911a4a8c72/horizon-log/0.log" Feb 16 14:00:33 crc kubenswrapper[4799]: I0216 14:00:33.244488 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_11134cac-9930-424d-8a67-69a6ba98ff21/kube-state-metrics/0.log" Feb 16 14:00:33 crc kubenswrapper[4799]: I0216 14:00:33.331168 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74bd488478-wqpd6_f3bee5f6-a064-4641-9a90-de58c60eb3aa/keystone-api/0.log" Feb 16 14:00:33 crc kubenswrapper[4799]: I0216 14:00:33.338460 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z_c895c98f-f5b4-4f98-b498-fe07218cad2f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:33 crc kubenswrapper[4799]: I0216 14:00:33.816508 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bd85f5c47-gbtmk_cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd/neutron-api/0.log" Feb 16 14:00:33 crc kubenswrapper[4799]: I0216 14:00:33.895154 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh_fbfe848b-c120-4ca7-993f-47c1e3902ed1/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:33 crc kubenswrapper[4799]: I0216 14:00:33.941070 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bd85f5c47-gbtmk_cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd/neutron-httpd/0.log" Feb 16 14:00:34 crc kubenswrapper[4799]: I0216 14:00:34.494336 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6/nova-cell0-conductor-conductor/0.log" Feb 16 14:00:34 crc kubenswrapper[4799]: I0216 14:00:34.766419 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_47764882-7881-4fbd-b682-c75a79736dea/nova-cell1-conductor-conductor/0.log" Feb 16 14:00:35 crc kubenswrapper[4799]: I0216 14:00:35.111515 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fa473e85-e345-4e62-b615-b9fc5b5ac754/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 14:00:35 crc kubenswrapper[4799]: I0216 14:00:35.485900 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-zr78d_9ecaed67-149c-4202-b3c9-c186d68a4b9a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:35 crc kubenswrapper[4799]: I0216 14:00:35.527007 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_9b1e557e-1e13-4d03-a4b9-fddccf7fc783/nova-api-log/0.log" Feb 16 14:00:35 crc kubenswrapper[4799]: I0216 14:00:35.876050 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_14e134b2-1c07-4a20-9bc6-ea4c75878094/nova-metadata-log/0.log" Feb 16 14:00:35 crc kubenswrapper[4799]: I0216 14:00:35.948822 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9b1e557e-1e13-4d03-a4b9-fddccf7fc783/nova-api-api/0.log" Feb 16 14:00:36 crc kubenswrapper[4799]: I0216 14:00:36.254562 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_06ddc5ff-d6d1-4997-8763-e97603e7df10/mysql-bootstrap/0.log" Feb 16 14:00:36 crc kubenswrapper[4799]: I0216 14:00:36.354353 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53/nova-scheduler-scheduler/0.log" Feb 16 14:00:36 crc kubenswrapper[4799]: I0216 14:00:36.443503 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_06ddc5ff-d6d1-4997-8763-e97603e7df10/mysql-bootstrap/0.log" Feb 16 14:00:36 crc kubenswrapper[4799]: I0216 14:00:36.519234 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_06ddc5ff-d6d1-4997-8763-e97603e7df10/galera/0.log" Feb 16 14:00:36 crc kubenswrapper[4799]: I0216 14:00:36.703476 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_19d52513-0bac-433d-8167-3abd90820fff/mysql-bootstrap/0.log" Feb 16 14:00:36 crc kubenswrapper[4799]: I0216 14:00:36.933668 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_19d52513-0bac-433d-8167-3abd90820fff/mysql-bootstrap/0.log" Feb 16 14:00:36 crc kubenswrapper[4799]: I0216 14:00:36.996994 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_19d52513-0bac-433d-8167-3abd90820fff/galera/0.log" Feb 16 14:00:37 crc kubenswrapper[4799]: I0216 14:00:37.108312 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8e024c88-16fc-4003-bc76-165ac4445e8f/openstackclient/0.log" Feb 16 14:00:37 crc kubenswrapper[4799]: I0216 14:00:37.213251 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-cbnmk_c4e49631-ab2b-49a4-befb-ccc2df5a47c4/openstack-network-exporter/0.log" Feb 16 14:00:37 crc kubenswrapper[4799]: I0216 14:00:37.979824 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_14e134b2-1c07-4a20-9bc6-ea4c75878094/nova-metadata-metadata/0.log" Feb 16 14:00:37 crc kubenswrapper[4799]: I0216 14:00:37.996133 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6rnj7_46a97d94-f787-4e62-86df-1ee58bdae9ce/ovsdb-server-init/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.218759 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6rnj7_46a97d94-f787-4e62-86df-1ee58bdae9ce/ovsdb-server-init/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.223655 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6rnj7_46a97d94-f787-4e62-86df-1ee58bdae9ce/ovsdb-server/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.457083 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wr6ph_d0a8e986-71a6-47cc-a34e-ddc323df4af4/ovn-controller/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.545076 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6rnj7_46a97d94-f787-4e62-86df-1ee58bdae9ce/ovs-vswitchd/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.579324 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hpddx_e3f7c5d7-95f5-4b8b-9a17-99c4a179064e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.709232 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68382ea2-c66d-4ea6-be55-f77490a81898/openstack-network-exporter/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.759702 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68382ea2-c66d-4ea6-be55-f77490a81898/ovn-northd/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.926888 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7/ovsdbserver-nb/0.log" Feb 16 14:00:38 crc kubenswrapper[4799]: I0216 14:00:38.972868 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7/openstack-network-exporter/0.log" Feb 16 14:00:39 crc kubenswrapper[4799]: I0216 14:00:39.047406 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b93c98d8-9585-4406-8d4f-54ebdb84ee2d/openstack-network-exporter/0.log" Feb 16 14:00:39 crc kubenswrapper[4799]: I0216 14:00:39.575070 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b93c98d8-9585-4406-8d4f-54ebdb84ee2d/ovsdbserver-sb/0.log" Feb 16 14:00:39 crc kubenswrapper[4799]: I0216 14:00:39.817680 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f58d8f5db-4k8dn_d2c303ca-c915-4f80-90b2-5e23882687b5/placement-api/0.log" Feb 16 14:00:39 crc kubenswrapper[4799]: I0216 14:00:39.916110 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f58d8f5db-4k8dn_d2c303ca-c915-4f80-90b2-5e23882687b5/placement-log/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.106246 4799 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/init-config-reloader/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.149623 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:00:40 crc kubenswrapper[4799]: E0216 14:00:40.150275 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.200886 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/init-config-reloader/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.274978 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/prometheus/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.277382 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/config-reloader/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.322194 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/thanos-sidecar/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.510926 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52adb145-1b05-4515-a214-83731e3504b4/setup-container/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.702861 
4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52adb145-1b05-4515-a214-83731e3504b4/setup-container/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.758586 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52adb145-1b05-4515-a214-83731e3504b4/rabbitmq/0.log" Feb 16 14:00:40 crc kubenswrapper[4799]: I0216 14:00:40.795424 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5b6ff320-8742-454a-9a6e-766db7e2c3a8/setup-container/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.034443 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5b6ff320-8742-454a-9a6e-766db7e2c3a8/setup-container/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.130728 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5b6ff320-8742-454a-9a6e-766db7e2c3a8/rabbitmq/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.138435 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a6be377-3c2d-46ab-a9b1-3faa91644a58/setup-container/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.386719 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a6be377-3c2d-46ab-a9b1-3faa91644a58/setup-container/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.427702 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a6be377-3c2d-46ab-a9b1-3faa91644a58/rabbitmq/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.487750 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v2558_cb5e39c0-c809-4971-a2ea-f2a01d9f4493/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:41 crc 
kubenswrapper[4799]: I0216 14:00:41.653937 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8xh9d_4e06d186-e0e8-4b62-8e6a-087d37dbd8c5/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.738391 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8_7cc337e4-c7f3-47cd-bd87-4d6230d8efcb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.867045 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f68cb9f4-b04b-4b52-92e0-153239877a17/memcached/0.log" Feb 16 14:00:41 crc kubenswrapper[4799]: I0216 14:00:41.958193 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2kzk7_b7657976-4772-4623-b14e-c9de2130efa5/ssh-known-hosts-edpm-deployment/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.003587 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bgvxk_bfb29f60-f76e-40d0-b672-ae1be3eb5c84/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.270262 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f54946f5f-2jrb5_441c04e7-2794-48cf-bc03-4c13536d22c4/proxy-httpd/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.270315 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-j6ghf_e330eb09-5b74-44cd-9812-1aaada5f979c/swift-ring-rebalance/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.288919 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f54946f5f-2jrb5_441c04e7-2794-48cf-bc03-4c13536d22c4/proxy-server/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.494037 
4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/account-auditor/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.508338 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/account-reaper/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.563062 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/account-server/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.591903 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/account-replicator/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.731599 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/container-auditor/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.769341 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/container-replicator/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.792390 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/container-server/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.800920 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/container-updater/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.831326 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-auditor/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.927289 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-expirer/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.974826 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-replicator/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.982066 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-updater/0.log" Feb 16 14:00:42 crc kubenswrapper[4799]: I0216 14:00:42.996034 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-server/0.log" Feb 16 14:00:43 crc kubenswrapper[4799]: I0216 14:00:43.065472 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/rsync/0.log" Feb 16 14:00:43 crc kubenswrapper[4799]: I0216 14:00:43.129681 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/swift-recon-cron/0.log" Feb 16 14:00:43 crc kubenswrapper[4799]: I0216 14:00:43.230514 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9_8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:43 crc kubenswrapper[4799]: I0216 14:00:43.322982 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd/tempest-tests-tempest-tests-runner/0.log" Feb 16 14:00:43 crc kubenswrapper[4799]: I0216 14:00:43.428644 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ba576df3-d525-4b57-9913-4c2c86246682/test-operator-logs-container/0.log" Feb 16 14:00:43 crc kubenswrapper[4799]: I0216 
14:00:43.477398 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jjr27_9dd7738f-7fe5-4522-94a5-afa6cf94a54d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:00:44 crc kubenswrapper[4799]: I0216 14:00:44.268189 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_9bd018cf-77c0-4f89-a1b7-e821440b0fe1/watcher-applier/0.log" Feb 16 14:00:44 crc kubenswrapper[4799]: I0216 14:00:44.970440 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9dddb140-3f08-4b16-97bf-be71806e7add/watcher-api-log/0.log" Feb 16 14:00:45 crc kubenswrapper[4799]: I0216 14:00:45.185655 4799 scope.go:117] "RemoveContainer" containerID="614e5c11ad6d4723da3490c631afd77844d96f43156bfad5654994db3fb07fc4" Feb 16 14:00:47 crc kubenswrapper[4799]: I0216 14:00:47.088994 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_a15e35f6-4998-4a70-9f95-272ba07a39ef/watcher-decision-engine/0.log" Feb 16 14:00:47 crc kubenswrapper[4799]: I0216 14:00:47.528553 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9dddb140-3f08-4b16-97bf-be71806e7add/watcher-api/0.log" Feb 16 14:00:53 crc kubenswrapper[4799]: I0216 14:00:53.150360 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:00:53 crc kubenswrapper[4799]: E0216 14:00:53.151783 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:01:00 crc 
kubenswrapper[4799]: I0216 14:01:00.154329 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520841-vcz9g"] Feb 16 14:01:00 crc kubenswrapper[4799]: E0216 14:01:00.155482 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3071d6-42a3-4fc8-a492-fe9155fa87ad" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.155505 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3071d6-42a3-4fc8-a492-fe9155fa87ad" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.155779 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3071d6-42a3-4fc8-a492-fe9155fa87ad" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.156805 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.165044 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520841-vcz9g"] Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.331638 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xjl\" (UniqueName: \"kubernetes.io/projected/d909d2d9-21eb-4176-9378-dbba67a87b93-kube-api-access-g6xjl\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.332063 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-fernet-keys\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.332271 4799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-combined-ca-bundle\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.332318 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-config-data\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.434305 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-combined-ca-bundle\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.434383 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-config-data\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.434472 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xjl\" (UniqueName: \"kubernetes.io/projected/d909d2d9-21eb-4176-9378-dbba67a87b93-kube-api-access-g6xjl\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.434729 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-fernet-keys\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.613952 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-fernet-keys\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.615723 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xjl\" (UniqueName: \"kubernetes.io/projected/d909d2d9-21eb-4176-9378-dbba67a87b93-kube-api-access-g6xjl\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.616970 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-config-data\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.619875 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-combined-ca-bundle\") pod \"keystone-cron-29520841-vcz9g\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:00 crc kubenswrapper[4799]: I0216 14:01:00.817485 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:01 crc kubenswrapper[4799]: I0216 14:01:01.456160 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520841-vcz9g"] Feb 16 14:01:01 crc kubenswrapper[4799]: W0216 14:01:01.458744 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd909d2d9_21eb_4176_9378_dbba67a87b93.slice/crio-45fe14e26e9a8ec6e7e5aef449aad01fd850dd4dc58a4b13de75629356d7f0d4 WatchSource:0}: Error finding container 45fe14e26e9a8ec6e7e5aef449aad01fd850dd4dc58a4b13de75629356d7f0d4: Status 404 returned error can't find the container with id 45fe14e26e9a8ec6e7e5aef449aad01fd850dd4dc58a4b13de75629356d7f0d4 Feb 16 14:01:01 crc kubenswrapper[4799]: I0216 14:01:01.896039 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-vcz9g" event={"ID":"d909d2d9-21eb-4176-9378-dbba67a87b93","Type":"ContainerStarted","Data":"5dbd89f014dc69738ea63f416d6992d8f3eaccfac5aabed4048ae1314e8b3a96"} Feb 16 14:01:01 crc kubenswrapper[4799]: I0216 14:01:01.896368 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-vcz9g" event={"ID":"d909d2d9-21eb-4176-9378-dbba67a87b93","Type":"ContainerStarted","Data":"45fe14e26e9a8ec6e7e5aef449aad01fd850dd4dc58a4b13de75629356d7f0d4"} Feb 16 14:01:01 crc kubenswrapper[4799]: I0216 14:01:01.921027 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520841-vcz9g" podStartSLOduration=1.921004943 podStartE2EDuration="1.921004943s" podCreationTimestamp="2026-02-16 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:01:01.911626927 +0000 UTC m=+5367.504642271" watchObservedRunningTime="2026-02-16 14:01:01.921004943 +0000 UTC m=+5367.514020267" Feb 16 14:01:05 crc 
kubenswrapper[4799]: I0216 14:01:05.934363 4799 generic.go:334] "Generic (PLEG): container finished" podID="d909d2d9-21eb-4176-9378-dbba67a87b93" containerID="5dbd89f014dc69738ea63f416d6992d8f3eaccfac5aabed4048ae1314e8b3a96" exitCode=0 Feb 16 14:01:05 crc kubenswrapper[4799]: I0216 14:01:05.935083 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-vcz9g" event={"ID":"d909d2d9-21eb-4176-9378-dbba67a87b93","Type":"ContainerDied","Data":"5dbd89f014dc69738ea63f416d6992d8f3eaccfac5aabed4048ae1314e8b3a96"} Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.337983 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.495577 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6xjl\" (UniqueName: \"kubernetes.io/projected/d909d2d9-21eb-4176-9378-dbba67a87b93-kube-api-access-g6xjl\") pod \"d909d2d9-21eb-4176-9378-dbba67a87b93\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.495626 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-combined-ca-bundle\") pod \"d909d2d9-21eb-4176-9378-dbba67a87b93\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.495692 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-config-data\") pod \"d909d2d9-21eb-4176-9378-dbba67a87b93\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.495758 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-fernet-keys\") pod \"d909d2d9-21eb-4176-9378-dbba67a87b93\" (UID: \"d909d2d9-21eb-4176-9378-dbba67a87b93\") " Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.502377 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d909d2d9-21eb-4176-9378-dbba67a87b93" (UID: "d909d2d9-21eb-4176-9378-dbba67a87b93"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.502453 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d909d2d9-21eb-4176-9378-dbba67a87b93-kube-api-access-g6xjl" (OuterVolumeSpecName: "kube-api-access-g6xjl") pod "d909d2d9-21eb-4176-9378-dbba67a87b93" (UID: "d909d2d9-21eb-4176-9378-dbba67a87b93"). InnerVolumeSpecName "kube-api-access-g6xjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.526634 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d909d2d9-21eb-4176-9378-dbba67a87b93" (UID: "d909d2d9-21eb-4176-9378-dbba67a87b93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.570262 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-config-data" (OuterVolumeSpecName: "config-data") pod "d909d2d9-21eb-4176-9378-dbba67a87b93" (UID: "d909d2d9-21eb-4176-9378-dbba67a87b93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.597862 4799 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.597895 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6xjl\" (UniqueName: \"kubernetes.io/projected/d909d2d9-21eb-4176-9378-dbba67a87b93-kube-api-access-g6xjl\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.597908 4799 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.597919 4799 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d909d2d9-21eb-4176-9378-dbba67a87b93-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.952342 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-vcz9g" event={"ID":"d909d2d9-21eb-4176-9378-dbba67a87b93","Type":"ContainerDied","Data":"45fe14e26e9a8ec6e7e5aef449aad01fd850dd4dc58a4b13de75629356d7f0d4"} Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.952702 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45fe14e26e9a8ec6e7e5aef449aad01fd850dd4dc58a4b13de75629356d7f0d4" Feb 16 14:01:07 crc kubenswrapper[4799]: I0216 14:01:07.952390 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520841-vcz9g" Feb 16 14:01:08 crc kubenswrapper[4799]: I0216 14:01:08.150017 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:01:08 crc kubenswrapper[4799]: E0216 14:01:08.150337 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:01:13 crc kubenswrapper[4799]: I0216 14:01:13.566506 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/util/0.log" Feb 16 14:01:13 crc kubenswrapper[4799]: I0216 14:01:13.948184 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/util/0.log" Feb 16 14:01:13 crc kubenswrapper[4799]: I0216 14:01:13.948542 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/pull/0.log" Feb 16 14:01:13 crc kubenswrapper[4799]: I0216 14:01:13.949020 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/pull/0.log" Feb 16 14:01:14 crc kubenswrapper[4799]: I0216 14:01:14.136742 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/util/0.log" Feb 16 14:01:14 crc kubenswrapper[4799]: I0216 14:01:14.180352 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/pull/0.log" Feb 16 14:01:14 crc kubenswrapper[4799]: I0216 14:01:14.184380 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/extract/0.log" Feb 16 14:01:14 crc kubenswrapper[4799]: I0216 14:01:14.674911 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-ddwg6_5cc692f7-262b-4ffa-b259-69f665422e8d/manager/0.log" Feb 16 14:01:15 crc kubenswrapper[4799]: I0216 14:01:15.021779 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68c6d499cb-z9x44_c8106c68-2300-410d-94fc-5dc71651dba5/manager/0.log" Feb 16 14:01:15 crc kubenswrapper[4799]: I0216 14:01:15.119131 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-9595d6797-cq9hr_b286a989-7544-4596-bb1b-f06469aedbdc/manager/0.log" Feb 16 14:01:15 crc kubenswrapper[4799]: I0216 14:01:15.349174 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54fb488b88-m6t96_3278a4bc-c2fa-4672-9a31-f53b0e95dbcd/manager/0.log" Feb 16 14:01:15 crc kubenswrapper[4799]: I0216 14:01:15.918238 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6494cdbf8f-lwlqz_f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf/manager/0.log" Feb 16 14:01:16 crc kubenswrapper[4799]: I0216 14:01:16.184182 4799 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-66d6b5f488-gt66t_ae60b108-5e33-408f-a861-8e2e1e9ab643/manager/0.log" Feb 16 14:01:16 crc kubenswrapper[4799]: I0216 14:01:16.483579 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c78d668d5-686fx_9ec15942-7ca3-444c-a096-a23c21b701ed/manager/0.log" Feb 16 14:01:16 crc kubenswrapper[4799]: I0216 14:01:16.673621 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-96fff9cb8-jb5fm_fb144fe6-dbb4-492a-acb1-b642ea0a20f0/manager/0.log" Feb 16 14:01:16 crc kubenswrapper[4799]: I0216 14:01:16.913675 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66997756f6-dqssm_1c684efb-e592-4c17-a896-897b466cd387/manager/0.log" Feb 16 14:01:16 crc kubenswrapper[4799]: I0216 14:01:16.972366 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-57746b5ff9-zh76r_b7dcb594-1126-4b75-8f5d-d2b5edc9ccad/manager/0.log" Feb 16 14:01:17 crc kubenswrapper[4799]: I0216 14:01:17.259506 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54967dbbdf-g4fg8_8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379/manager/0.log" Feb 16 14:01:17 crc kubenswrapper[4799]: I0216 14:01:17.392189 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ddd85db87-8r6qg_17536931-400e-4131-8992-a30c2ebda385/manager/0.log" Feb 16 14:01:17 crc kubenswrapper[4799]: I0216 14:01:17.596833 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5_3469cc9e-8b93-4c52-957a-78b91019767d/manager/0.log" Feb 16 14:01:18 crc kubenswrapper[4799]: I0216 
14:01:18.025076 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7678556f8f-7z95t_e414b45d-e5dd-4905-9f69-781ec6e6d824/operator/0.log" Feb 16 14:01:18 crc kubenswrapper[4799]: I0216 14:01:18.209068 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pvc2p_29da4bf2-657a-4d9d-b61b-788ef89d4b19/registry-server/0.log" Feb 16 14:01:18 crc kubenswrapper[4799]: I0216 14:01:18.465045 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-85c99d655-5trbx_12dbbffb-b10a-4b02-9698-fa66c5ff9451/manager/0.log" Feb 16 14:01:18 crc kubenswrapper[4799]: I0216 14:01:18.676534 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57bd55f9b7-rv7cl_1328d15a-4b40-4db9-b0f8-0c8490e623b9/manager/0.log" Feb 16 14:01:18 crc kubenswrapper[4799]: I0216 14:01:18.875407 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hrpbx_692956be-1d06-489c-9a30-0f7e4e144caa/operator/0.log" Feb 16 14:01:19 crc kubenswrapper[4799]: I0216 14:01:19.141838 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-79558bbfbf-6fhfw_bd478887-eb50-4e9c-8933-7b513c323cac/manager/0.log" Feb 16 14:01:19 crc kubenswrapper[4799]: I0216 14:01:19.579722 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-lz8sd_12e59839-c074-42ea-84e6-1be9b5a261ad/manager/0.log" Feb 16 14:01:19 crc kubenswrapper[4799]: I0216 14:01:19.715320 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-fhf99_7333b2fd-d81d-4daa-965a-3d5fefca8863/manager/0.log" Feb 16 14:01:20 crc kubenswrapper[4799]: I0216 
14:01:20.099424 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7f65d44ccf-htwqf_0935892b-89a7-4b63-8012-dbe285c5a2f3/manager/0.log" Feb 16 14:01:20 crc kubenswrapper[4799]: I0216 14:01:20.180536 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-667bdd5bc9-lpnbm_1e501664-2258-45c7-8934-7f953c7fc799/manager/0.log" Feb 16 14:01:20 crc kubenswrapper[4799]: I0216 14:01:20.436774 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-745bbbd77b-4g8xm_ec674ea8-aa42-4917-906f-9a9b098ba2c0/manager/0.log" Feb 16 14:01:23 crc kubenswrapper[4799]: I0216 14:01:23.149824 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:01:23 crc kubenswrapper[4799]: E0216 14:01:23.150772 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:01:26 crc kubenswrapper[4799]: I0216 14:01:26.650497 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-c4b7d6946-lzptd_e555e0d9-b9d6-4e25-ad40-c6d9c1cae800/manager/0.log" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.710421 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgrq7"] Feb 16 14:01:32 crc kubenswrapper[4799]: E0216 14:01:32.711965 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d909d2d9-21eb-4176-9378-dbba67a87b93" 
containerName="keystone-cron" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.711992 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d909d2d9-21eb-4176-9378-dbba67a87b93" containerName="keystone-cron" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.712382 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d909d2d9-21eb-4176-9378-dbba67a87b93" containerName="keystone-cron" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.715390 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.726649 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgrq7"] Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.793804 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-utilities\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.793865 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-catalog-content\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.794080 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grpt\" (UniqueName: \"kubernetes.io/projected/5d3627ae-1204-409f-95a2-f2628087a22d-kube-api-access-9grpt\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " 
pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.896210 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9grpt\" (UniqueName: \"kubernetes.io/projected/5d3627ae-1204-409f-95a2-f2628087a22d-kube-api-access-9grpt\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.896345 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-utilities\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.896387 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-catalog-content\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.897048 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-catalog-content\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.897094 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-utilities\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " pod="openshift-marketplace/redhat-marketplace-rgrq7" 
Feb 16 14:01:32 crc kubenswrapper[4799]: I0216 14:01:32.918222 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grpt\" (UniqueName: \"kubernetes.io/projected/5d3627ae-1204-409f-95a2-f2628087a22d-kube-api-access-9grpt\") pod \"redhat-marketplace-rgrq7\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:33 crc kubenswrapper[4799]: I0216 14:01:33.044538 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:33 crc kubenswrapper[4799]: I0216 14:01:33.532557 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgrq7"] Feb 16 14:01:34 crc kubenswrapper[4799]: I0216 14:01:34.210479 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d3627ae-1204-409f-95a2-f2628087a22d" containerID="b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda" exitCode=0 Feb 16 14:01:34 crc kubenswrapper[4799]: I0216 14:01:34.210611 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgrq7" event={"ID":"5d3627ae-1204-409f-95a2-f2628087a22d","Type":"ContainerDied","Data":"b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda"} Feb 16 14:01:34 crc kubenswrapper[4799]: I0216 14:01:34.210842 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgrq7" event={"ID":"5d3627ae-1204-409f-95a2-f2628087a22d","Type":"ContainerStarted","Data":"2569c3ef95879563e8d69131b873b6a5dcb726954f472d8b65e368fbec122dcf"} Feb 16 14:01:35 crc kubenswrapper[4799]: I0216 14:01:35.223522 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgrq7" event={"ID":"5d3627ae-1204-409f-95a2-f2628087a22d","Type":"ContainerStarted","Data":"f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4"} Feb 16 
14:01:36 crc kubenswrapper[4799]: I0216 14:01:36.234249 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d3627ae-1204-409f-95a2-f2628087a22d" containerID="f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4" exitCode=0 Feb 16 14:01:36 crc kubenswrapper[4799]: I0216 14:01:36.234335 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgrq7" event={"ID":"5d3627ae-1204-409f-95a2-f2628087a22d","Type":"ContainerDied","Data":"f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4"} Feb 16 14:01:37 crc kubenswrapper[4799]: I0216 14:01:37.245428 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgrq7" event={"ID":"5d3627ae-1204-409f-95a2-f2628087a22d","Type":"ContainerStarted","Data":"197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb"} Feb 16 14:01:37 crc kubenswrapper[4799]: I0216 14:01:37.268320 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgrq7" podStartSLOduration=2.807482143 podStartE2EDuration="5.268295188s" podCreationTimestamp="2026-02-16 14:01:32 +0000 UTC" firstStartedPulling="2026-02-16 14:01:34.214207483 +0000 UTC m=+5399.807222817" lastFinishedPulling="2026-02-16 14:01:36.675020488 +0000 UTC m=+5402.268035862" observedRunningTime="2026-02-16 14:01:37.260640821 +0000 UTC m=+5402.853656155" watchObservedRunningTime="2026-02-16 14:01:37.268295188 +0000 UTC m=+5402.861310522" Feb 16 14:01:38 crc kubenswrapper[4799]: I0216 14:01:38.150198 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:01:38 crc kubenswrapper[4799]: E0216 14:01:38.150770 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:01:42 crc kubenswrapper[4799]: I0216 14:01:42.965634 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-swx86_a6d10e0e-6088-4be2-90a6-5ea568d7ce25/control-plane-machine-set-operator/0.log" Feb 16 14:01:43 crc kubenswrapper[4799]: I0216 14:01:43.045446 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:43 crc kubenswrapper[4799]: I0216 14:01:43.045940 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:43 crc kubenswrapper[4799]: I0216 14:01:43.138653 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6lds8_12ef62d5-7675-44bf-a2e9-53093b004126/machine-api-operator/0.log" Feb 16 14:01:43 crc kubenswrapper[4799]: I0216 14:01:43.175087 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6lds8_12ef62d5-7675-44bf-a2e9-53093b004126/kube-rbac-proxy/0.log" Feb 16 14:01:43 crc kubenswrapper[4799]: I0216 14:01:43.654969 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:43 crc kubenswrapper[4799]: I0216 14:01:43.710595 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:43 crc kubenswrapper[4799]: I0216 14:01:43.895219 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgrq7"] Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.334538 4799 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgrq7" podUID="5d3627ae-1204-409f-95a2-f2628087a22d" containerName="registry-server" containerID="cri-o://197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb" gracePeriod=2 Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.789951 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.878470 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-utilities\") pod \"5d3627ae-1204-409f-95a2-f2628087a22d\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.878517 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-catalog-content\") pod \"5d3627ae-1204-409f-95a2-f2628087a22d\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.878586 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9grpt\" (UniqueName: \"kubernetes.io/projected/5d3627ae-1204-409f-95a2-f2628087a22d-kube-api-access-9grpt\") pod \"5d3627ae-1204-409f-95a2-f2628087a22d\" (UID: \"5d3627ae-1204-409f-95a2-f2628087a22d\") " Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.879333 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-utilities" (OuterVolumeSpecName: "utilities") pod "5d3627ae-1204-409f-95a2-f2628087a22d" (UID: "5d3627ae-1204-409f-95a2-f2628087a22d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.905154 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3627ae-1204-409f-95a2-f2628087a22d-kube-api-access-9grpt" (OuterVolumeSpecName: "kube-api-access-9grpt") pod "5d3627ae-1204-409f-95a2-f2628087a22d" (UID: "5d3627ae-1204-409f-95a2-f2628087a22d"). InnerVolumeSpecName "kube-api-access-9grpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.910675 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d3627ae-1204-409f-95a2-f2628087a22d" (UID: "5d3627ae-1204-409f-95a2-f2628087a22d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.981723 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.981767 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d3627ae-1204-409f-95a2-f2628087a22d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:45 crc kubenswrapper[4799]: I0216 14:01:45.981784 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9grpt\" (UniqueName: \"kubernetes.io/projected/5d3627ae-1204-409f-95a2-f2628087a22d-kube-api-access-9grpt\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.352935 4799 generic.go:334] "Generic (PLEG): container finished" podID="5d3627ae-1204-409f-95a2-f2628087a22d" 
containerID="197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb" exitCode=0 Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.353059 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgrq7" event={"ID":"5d3627ae-1204-409f-95a2-f2628087a22d","Type":"ContainerDied","Data":"197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb"} Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.353371 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgrq7" event={"ID":"5d3627ae-1204-409f-95a2-f2628087a22d","Type":"ContainerDied","Data":"2569c3ef95879563e8d69131b873b6a5dcb726954f472d8b65e368fbec122dcf"} Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.353399 4799 scope.go:117] "RemoveContainer" containerID="197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.353078 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgrq7" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.379055 4799 scope.go:117] "RemoveContainer" containerID="f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.407050 4799 scope.go:117] "RemoveContainer" containerID="b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.411413 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgrq7"] Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.423837 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgrq7"] Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.483936 4799 scope.go:117] "RemoveContainer" containerID="197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb" Feb 16 14:01:46 crc kubenswrapper[4799]: E0216 14:01:46.484280 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb\": container with ID starting with 197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb not found: ID does not exist" containerID="197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.484310 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb"} err="failed to get container status \"197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb\": rpc error: code = NotFound desc = could not find container \"197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb\": container with ID starting with 197698a4284b61ec25f5c903ad1653e54456d39d5775eaacba97e856f37e35cb not found: 
ID does not exist" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.484330 4799 scope.go:117] "RemoveContainer" containerID="f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4" Feb 16 14:01:46 crc kubenswrapper[4799]: E0216 14:01:46.484564 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4\": container with ID starting with f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4 not found: ID does not exist" containerID="f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.484584 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4"} err="failed to get container status \"f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4\": rpc error: code = NotFound desc = could not find container \"f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4\": container with ID starting with f8ef69ba36805a8a6501978751205f0e03c5f8bcbb1b11ef8f6e1f10e32712a4 not found: ID does not exist" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.484596 4799 scope.go:117] "RemoveContainer" containerID="b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda" Feb 16 14:01:46 crc kubenswrapper[4799]: E0216 14:01:46.484743 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda\": container with ID starting with b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda not found: ID does not exist" containerID="b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda" Feb 16 14:01:46 crc kubenswrapper[4799]: I0216 14:01:46.484765 4799 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda"} err="failed to get container status \"b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda\": rpc error: code = NotFound desc = could not find container \"b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda\": container with ID starting with b72dec262093f5ab134d48841da66c4f4fd19dd0338802dda5ba6f31cc4d3fda not found: ID does not exist" Feb 16 14:01:47 crc kubenswrapper[4799]: I0216 14:01:47.161391 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3627ae-1204-409f-95a2-f2628087a22d" path="/var/lib/kubelet/pods/5d3627ae-1204-409f-95a2-f2628087a22d/volumes" Feb 16 14:01:52 crc kubenswrapper[4799]: I0216 14:01:52.150485 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:01:52 crc kubenswrapper[4799]: E0216 14:01:52.152461 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:01:56 crc kubenswrapper[4799]: I0216 14:01:56.191309 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hcks5_4ce49784-a833-4d3a-8101-9618730dd5c7/cert-manager-controller/0.log" Feb 16 14:01:56 crc kubenswrapper[4799]: I0216 14:01:56.347560 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-kwbcb_d2d7275d-595b-44d8-afc7-8df5bb4b8e18/cert-manager-cainjector/0.log" Feb 16 14:01:56 crc kubenswrapper[4799]: I0216 14:01:56.381292 4799 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-p9txt_75520423-f121-446d-8ad2-d0bfc440fd76/cert-manager-webhook/0.log" Feb 16 14:02:07 crc kubenswrapper[4799]: I0216 14:02:07.153547 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:02:07 crc kubenswrapper[4799]: E0216 14:02:07.154386 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:02:11 crc kubenswrapper[4799]: I0216 14:02:11.950626 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-x5r6j_dd3fb402-ea08-43d2-a79b-81e50caac303/nmstate-console-plugin/0.log" Feb 16 14:02:12 crc kubenswrapper[4799]: I0216 14:02:12.149825 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8zffw_cc1669bc-8a99-4bd8-979a-59d07b2cc876/nmstate-handler/0.log" Feb 16 14:02:12 crc kubenswrapper[4799]: I0216 14:02:12.151775 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-prbbx_3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd/kube-rbac-proxy/0.log" Feb 16 14:02:12 crc kubenswrapper[4799]: I0216 14:02:12.257983 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-prbbx_3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd/nmstate-metrics/0.log" Feb 16 14:02:12 crc kubenswrapper[4799]: I0216 14:02:12.352111 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-9fd4k_a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660/nmstate-operator/0.log" Feb 16 14:02:12 crc kubenswrapper[4799]: I0216 14:02:12.451811 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-v55q4_ea8a1c06-85d6-40e1-933d-163d4247f147/nmstate-webhook/0.log" Feb 16 14:02:19 crc kubenswrapper[4799]: I0216 14:02:19.149654 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:02:19 crc kubenswrapper[4799]: E0216 14:02:19.150798 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:02:26 crc kubenswrapper[4799]: I0216 14:02:26.611670 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-l48qr_ac6a624e-f6f1-44b4-b236-99307dfc75b3/prometheus-operator/0.log" Feb 16 14:02:26 crc kubenswrapper[4799]: I0216 14:02:26.813302 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr_956b64fb-674a-40a6-be9b-b249d5b03aab/prometheus-operator-admission-webhook/0.log" Feb 16 14:02:26 crc kubenswrapper[4799]: I0216 14:02:26.853668 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8_25240a98-4447-4af0-89d7-8868fed65af8/prometheus-operator-admission-webhook/0.log" Feb 16 14:02:27 crc kubenswrapper[4799]: I0216 14:02:27.020777 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9kr64_1f31c8ae-d209-4bed-8ed7-f568f713bd15/operator/0.log" Feb 16 14:02:27 crc kubenswrapper[4799]: I0216 14:02:27.077766 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fp4wv_ae279f38-d065-46a1-adb4-671588c18906/perses-operator/0.log" Feb 16 14:02:30 crc kubenswrapper[4799]: I0216 14:02:30.150474 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:02:30 crc kubenswrapper[4799]: E0216 14:02:30.151014 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.088895 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-4djwq_c54deb12-6083-4890-ab2d-20c5cede1547/kube-rbac-proxy/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.222256 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-4djwq_c54deb12-6083-4890-ab2d-20c5cede1547/controller/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.307588 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-frr-files/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.520096 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-frr-files/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.523621 4799 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-metrics/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.556787 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-reloader/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.563266 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-reloader/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.710925 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-metrics/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.732315 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-reloader/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.740742 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-frr-files/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.757283 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-metrics/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.959568 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-frr-files/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.966518 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/controller/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.976288 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-reloader/0.log" Feb 16 14:02:40 crc kubenswrapper[4799]: I0216 14:02:40.998038 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-metrics/0.log" Feb 16 14:02:41 crc kubenswrapper[4799]: I0216 14:02:41.154266 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/kube-rbac-proxy/0.log" Feb 16 14:02:41 crc kubenswrapper[4799]: I0216 14:02:41.183323 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/frr-metrics/0.log" Feb 16 14:02:41 crc kubenswrapper[4799]: I0216 14:02:41.208496 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/kube-rbac-proxy-frr/0.log" Feb 16 14:02:41 crc kubenswrapper[4799]: I0216 14:02:41.408003 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/reloader/0.log" Feb 16 14:02:41 crc kubenswrapper[4799]: I0216 14:02:41.483098 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-qrqgr_4c963766-8661-4a44-8416-f0202f10fafb/frr-k8s-webhook-server/0.log" Feb 16 14:02:41 crc kubenswrapper[4799]: I0216 14:02:41.712219 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c7df86bbf-sjqnz_4af8dbaa-4279-4669-ac62-b78ae77d4063/manager/0.log" Feb 16 14:02:41 crc kubenswrapper[4799]: I0216 14:02:41.835611 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-67d76b6b75-prfvg_11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3/webhook-server/0.log" Feb 16 14:02:41 crc kubenswrapper[4799]: I0216 14:02:41.965455 4799 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jcvfs_00530bae-1878-49a9-876f-97b521db61cd/kube-rbac-proxy/0.log" Feb 16 14:02:42 crc kubenswrapper[4799]: I0216 14:02:42.582454 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jcvfs_00530bae-1878-49a9-876f-97b521db61cd/speaker/0.log" Feb 16 14:02:42 crc kubenswrapper[4799]: I0216 14:02:42.922423 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/frr/0.log" Feb 16 14:02:45 crc kubenswrapper[4799]: I0216 14:02:45.158640 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:02:45 crc kubenswrapper[4799]: E0216 14:02:45.161932 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:02:55 crc kubenswrapper[4799]: I0216 14:02:55.870226 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/util/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.142009 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/util/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.177468 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/pull/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.221344 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/pull/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.440694 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/pull/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.441378 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/extract/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.457716 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/util/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.613488 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/util/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.806778 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/util/0.log" Feb 16 14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.807015 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/pull/0.log" Feb 16 
14:02:56 crc kubenswrapper[4799]: I0216 14:02:56.811879 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/pull/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.011095 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/extract/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.025716 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/util/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.038181 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/pull/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.208397 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-utilities/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.408576 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-utilities/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.507836 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-content/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.534657 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-content/0.log" Feb 16 
14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.637279 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-content/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.643012 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-utilities/0.log" Feb 16 14:02:57 crc kubenswrapper[4799]: I0216 14:02:57.846382 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-utilities/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.149585 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-content/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.204186 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-utilities/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.216407 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-content/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.268887 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/registry-server/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.419152 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-content/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.424148 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-utilities/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.680575 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/util/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.830159 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/registry-server/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.901921 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/pull/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.945688 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/pull/0.log" Feb 16 14:02:58 crc kubenswrapper[4799]: I0216 14:02:58.951062 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/util/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.052731 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/util/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.110678 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/pull/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.197330 
4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/extract/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.231566 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qb8p5_a8b56ef0-6df7-4a6a-a550-b0699ebaf909/marketplace-operator/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.338935 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-utilities/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.776428 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-content/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.781997 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-utilities/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.821221 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-content/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.963488 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-utilities/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.968501 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-content/0.log" Feb 16 14:02:59 crc kubenswrapper[4799]: I0216 14:02:59.998058 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-utilities/0.log" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.131980 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/registry-server/0.log" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.149358 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:03:00 crc kubenswrapper[4799]: E0216 14:03:00.149689 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.268580 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-utilities/0.log" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.269719 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nl5pj"] Feb 16 14:03:00 crc kubenswrapper[4799]: E0216 14:03:00.270105 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3627ae-1204-409f-95a2-f2628087a22d" containerName="extract-utilities" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.270134 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3627ae-1204-409f-95a2-f2628087a22d" containerName="extract-utilities" Feb 16 14:03:00 crc kubenswrapper[4799]: E0216 14:03:00.270171 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d3627ae-1204-409f-95a2-f2628087a22d" containerName="registry-server" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.270178 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3627ae-1204-409f-95a2-f2628087a22d" containerName="registry-server" Feb 16 14:03:00 crc kubenswrapper[4799]: E0216 14:03:00.270192 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3627ae-1204-409f-95a2-f2628087a22d" containerName="extract-content" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.270199 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3627ae-1204-409f-95a2-f2628087a22d" containerName="extract-content" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.270396 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3627ae-1204-409f-95a2-f2628087a22d" containerName="registry-server" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.271851 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nl5pj" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.281896 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nl5pj"] Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.321862 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-content/0.log" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.331535 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-utilities\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.331845 4799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-catalog-content\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.331974 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmj9t\" (UniqueName: \"kubernetes.io/projected/326781b4-f58d-4e82-9a10-1e5186947b17-kube-api-access-cmj9t\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.358337 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-content/0.log" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.433635 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmj9t\" (UniqueName: \"kubernetes.io/projected/326781b4-f58d-4e82-9a10-1e5186947b17-kube-api-access-cmj9t\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.433769 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-utilities\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj" Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.433911 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-catalog-content\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.434384 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-catalog-content\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.434613 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-utilities\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.453156 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmj9t\" (UniqueName: \"kubernetes.io/projected/326781b4-f58d-4e82-9a10-1e5186947b17-kube-api-access-cmj9t\") pod \"community-operators-nl5pj\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") " pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.545098 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-utilities/0.log"
Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.630801 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:00 crc kubenswrapper[4799]: I0216 14:03:00.753457 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-content/0.log"
Feb 16 14:03:01 crc kubenswrapper[4799]: I0216 14:03:01.190572 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nl5pj"]
Feb 16 14:03:01 crc kubenswrapper[4799]: I0216 14:03:01.457337 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/registry-server/0.log"
Feb 16 14:03:01 crc kubenswrapper[4799]: I0216 14:03:01.701809 4799 generic.go:334] "Generic (PLEG): container finished" podID="326781b4-f58d-4e82-9a10-1e5186947b17" containerID="de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e" exitCode=0
Feb 16 14:03:01 crc kubenswrapper[4799]: I0216 14:03:01.701999 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nl5pj" event={"ID":"326781b4-f58d-4e82-9a10-1e5186947b17","Type":"ContainerDied","Data":"de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e"}
Feb 16 14:03:01 crc kubenswrapper[4799]: I0216 14:03:01.702089 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nl5pj" event={"ID":"326781b4-f58d-4e82-9a10-1e5186947b17","Type":"ContainerStarted","Data":"00bd88a3a722b281e1290bbd1ef7176abe7474c847bb2f2d56496a32468cea42"}
Feb 16 14:03:03 crc kubenswrapper[4799]: I0216 14:03:03.722455 4799 generic.go:334] "Generic (PLEG): container finished" podID="326781b4-f58d-4e82-9a10-1e5186947b17" containerID="d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775" exitCode=0
Feb 16 14:03:03 crc kubenswrapper[4799]: I0216 14:03:03.722530 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nl5pj" event={"ID":"326781b4-f58d-4e82-9a10-1e5186947b17","Type":"ContainerDied","Data":"d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775"}
Feb 16 14:03:04 crc kubenswrapper[4799]: I0216 14:03:04.731963 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nl5pj" event={"ID":"326781b4-f58d-4e82-9a10-1e5186947b17","Type":"ContainerStarted","Data":"40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3"}
Feb 16 14:03:10 crc kubenswrapper[4799]: I0216 14:03:10.631861 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:10 crc kubenswrapper[4799]: I0216 14:03:10.632831 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:10 crc kubenswrapper[4799]: I0216 14:03:10.675625 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:10 crc kubenswrapper[4799]: I0216 14:03:10.707235 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nl5pj" podStartSLOduration=8.288947832 podStartE2EDuration="10.707213571s" podCreationTimestamp="2026-02-16 14:03:00 +0000 UTC" firstStartedPulling="2026-02-16 14:03:01.704210222 +0000 UTC m=+5487.297225556" lastFinishedPulling="2026-02-16 14:03:04.122475961 +0000 UTC m=+5489.715491295" observedRunningTime="2026-02-16 14:03:04.750260848 +0000 UTC m=+5490.343276182" watchObservedRunningTime="2026-02-16 14:03:10.707213571 +0000 UTC m=+5496.300228915"
Feb 16 14:03:10 crc kubenswrapper[4799]: I0216 14:03:10.836792 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:10 crc kubenswrapper[4799]: I0216 14:03:10.910166 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nl5pj"]
Feb 16 14:03:12 crc kubenswrapper[4799]: I0216 14:03:12.803484 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nl5pj" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" containerName="registry-server" containerID="cri-o://40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3" gracePeriod=2
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.332871 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.439829 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmj9t\" (UniqueName: \"kubernetes.io/projected/326781b4-f58d-4e82-9a10-1e5186947b17-kube-api-access-cmj9t\") pod \"326781b4-f58d-4e82-9a10-1e5186947b17\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") "
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.439896 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-catalog-content\") pod \"326781b4-f58d-4e82-9a10-1e5186947b17\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") "
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.439988 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-utilities\") pod \"326781b4-f58d-4e82-9a10-1e5186947b17\" (UID: \"326781b4-f58d-4e82-9a10-1e5186947b17\") "
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.441653 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-utilities" (OuterVolumeSpecName: "utilities") pod "326781b4-f58d-4e82-9a10-1e5186947b17" (UID: "326781b4-f58d-4e82-9a10-1e5186947b17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.453279 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326781b4-f58d-4e82-9a10-1e5186947b17-kube-api-access-cmj9t" (OuterVolumeSpecName: "kube-api-access-cmj9t") pod "326781b4-f58d-4e82-9a10-1e5186947b17" (UID: "326781b4-f58d-4e82-9a10-1e5186947b17"). InnerVolumeSpecName "kube-api-access-cmj9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.539471 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "326781b4-f58d-4e82-9a10-1e5186947b17" (UID: "326781b4-f58d-4e82-9a10-1e5186947b17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.542249 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmj9t\" (UniqueName: \"kubernetes.io/projected/326781b4-f58d-4e82-9a10-1e5186947b17-kube-api-access-cmj9t\") on node \"crc\" DevicePath \"\""
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.542291 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.542304 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326781b4-f58d-4e82-9a10-1e5186947b17-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.814687 4799 generic.go:334] "Generic (PLEG): container finished" podID="326781b4-f58d-4e82-9a10-1e5186947b17" containerID="40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3" exitCode=0
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.814737 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nl5pj" event={"ID":"326781b4-f58d-4e82-9a10-1e5186947b17","Type":"ContainerDied","Data":"40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3"}
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.814777 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nl5pj" event={"ID":"326781b4-f58d-4e82-9a10-1e5186947b17","Type":"ContainerDied","Data":"00bd88a3a722b281e1290bbd1ef7176abe7474c847bb2f2d56496a32468cea42"}
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.814800 4799 scope.go:117] "RemoveContainer" containerID="40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.814973 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nl5pj"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.843964 4799 scope.go:117] "RemoveContainer" containerID="d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.860336 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nl5pj"]
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.885379 4799 scope.go:117] "RemoveContainer" containerID="de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.885907 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nl5pj"]
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.919934 4799 scope.go:117] "RemoveContainer" containerID="40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3"
Feb 16 14:03:13 crc kubenswrapper[4799]: E0216 14:03:13.920691 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3\": container with ID starting with 40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3 not found: ID does not exist" containerID="40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.920738 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3"} err="failed to get container status \"40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3\": rpc error: code = NotFound desc = could not find container \"40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3\": container with ID starting with 40150c92907cd5e5965a9cb2e7e5c9cacf7d8dc9c8a9ecac5009fcb4863cc8c3 not found: ID does not exist"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.920770 4799 scope.go:117] "RemoveContainer" containerID="d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775"
Feb 16 14:03:13 crc kubenswrapper[4799]: E0216 14:03:13.921318 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775\": container with ID starting with d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775 not found: ID does not exist" containerID="d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.921354 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775"} err="failed to get container status \"d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775\": rpc error: code = NotFound desc = could not find container \"d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775\": container with ID starting with d2c0fbd0e47f253915a7803572c0d1ab6ffe01c44fa0bb84f9f1413a29b84775 not found: ID does not exist"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.921376 4799 scope.go:117] "RemoveContainer" containerID="de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e"
Feb 16 14:03:13 crc kubenswrapper[4799]: E0216 14:03:13.924482 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e\": container with ID starting with de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e not found: ID does not exist" containerID="de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e"
Feb 16 14:03:13 crc kubenswrapper[4799]: I0216 14:03:13.924527 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e"} err="failed to get container status \"de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e\": rpc error: code = NotFound desc = could not find container \"de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e\": container with ID starting with de7213e337c9f7b47c0a39c781faaeadcd875d0d409fb5195c6ddf06ae98cd7e not found: ID does not exist"
Feb 16 14:03:15 crc kubenswrapper[4799]: I0216 14:03:15.158042 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:03:15 crc kubenswrapper[4799]: E0216 14:03:15.158616 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1"
Feb 16 14:03:15 crc kubenswrapper[4799]: I0216 14:03:15.160153 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" path="/var/lib/kubelet/pods/326781b4-f58d-4e82-9a10-1e5186947b17/volumes"
Feb 16 14:03:15 crc kubenswrapper[4799]: I0216 14:03:15.538609 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-l48qr_ac6a624e-f6f1-44b4-b236-99307dfc75b3/prometheus-operator/0.log"
Feb 16 14:03:15 crc kubenswrapper[4799]: I0216 14:03:15.684777 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8_25240a98-4447-4af0-89d7-8868fed65af8/prometheus-operator-admission-webhook/0.log"
Feb 16 14:03:15 crc kubenswrapper[4799]: I0216 14:03:15.714810 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr_956b64fb-674a-40a6-be9b-b249d5b03aab/prometheus-operator-admission-webhook/0.log"
Feb 16 14:03:15 crc kubenswrapper[4799]: I0216 14:03:15.778853 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9kr64_1f31c8ae-d209-4bed-8ed7-f568f713bd15/operator/0.log"
Feb 16 14:03:15 crc kubenswrapper[4799]: I0216 14:03:15.875184 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fp4wv_ae279f38-d065-46a1-adb4-671588c18906/perses-operator/0.log"
Feb 16 14:03:30 crc kubenswrapper[4799]: I0216 14:03:30.148920 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:03:30 crc kubenswrapper[4799]: E0216 14:03:30.149632 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1"
Feb 16 14:03:43 crc kubenswrapper[4799]: I0216 14:03:43.150753 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:03:43 crc kubenswrapper[4799]: E0216 14:03:43.152056 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1"
Feb 16 14:03:58 crc kubenswrapper[4799]: I0216 14:03:58.150452 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:03:58 crc kubenswrapper[4799]: E0216 14:03:58.151409 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1"
Feb 16 14:04:10 crc kubenswrapper[4799]: I0216 14:04:10.149330 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:04:10 crc kubenswrapper[4799]: E0216 14:04:10.150187 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1"
Feb 16 14:04:21 crc kubenswrapper[4799]: I0216 14:04:21.150196 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:04:21 crc kubenswrapper[4799]: E0216 14:04:21.151404 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1"
Feb 16 14:04:35 crc kubenswrapper[4799]: I0216 14:04:35.166749 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:04:35 crc kubenswrapper[4799]: E0216 14:04:35.168395 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1"
Feb 16 14:04:49 crc kubenswrapper[4799]: I0216 14:04:49.149599 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:04:49 crc kubenswrapper[4799]: E0216 14:04:49.151192 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1"
Feb 16 14:05:00 crc kubenswrapper[4799]: I0216 14:05:00.149152 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94"
Feb 16 14:05:00 crc kubenswrapper[4799]: I0216 14:05:00.923425 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"f61a0149fc9439a26bb072a85fd3086e36ae51fb1d0c2377e8f6f1853e70763f"}
Feb 16 14:05:18 crc kubenswrapper[4799]: I0216 14:05:18.178059 4799 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerID="bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561" exitCode=0
Feb 16 14:05:18 crc kubenswrapper[4799]: I0216 14:05:18.178174 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h5p99/must-gather-vvkcn" event={"ID":"e7f6ad70-d861-46e3-a282-d134389f05fb","Type":"ContainerDied","Data":"bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561"}
Feb 16 14:05:18 crc kubenswrapper[4799]: I0216 14:05:18.179435 4799 scope.go:117] "RemoveContainer" containerID="bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561"
Feb 16 14:05:19 crc kubenswrapper[4799]: I0216 14:05:19.255802 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h5p99_must-gather-vvkcn_e7f6ad70-d861-46e3-a282-d134389f05fb/gather/0.log"
Feb 16 14:05:28 crc kubenswrapper[4799]: I0216 14:05:28.558039 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h5p99/must-gather-vvkcn"]
Feb 16 14:05:28 crc kubenswrapper[4799]: I0216 14:05:28.558936 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-h5p99/must-gather-vvkcn" podUID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerName="copy" containerID="cri-o://1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7" gracePeriod=2
Feb 16 14:05:28 crc kubenswrapper[4799]: I0216 14:05:28.574406 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h5p99/must-gather-vvkcn"]
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.065542 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h5p99_must-gather-vvkcn_e7f6ad70-d861-46e3-a282-d134389f05fb/copy/0.log"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.066503 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/must-gather-vvkcn"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.174260 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7f6ad70-d861-46e3-a282-d134389f05fb-must-gather-output\") pod \"e7f6ad70-d861-46e3-a282-d134389f05fb\" (UID: \"e7f6ad70-d861-46e3-a282-d134389f05fb\") "
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.174579 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2kx\" (UniqueName: \"kubernetes.io/projected/e7f6ad70-d861-46e3-a282-d134389f05fb-kube-api-access-9x2kx\") pod \"e7f6ad70-d861-46e3-a282-d134389f05fb\" (UID: \"e7f6ad70-d861-46e3-a282-d134389f05fb\") "
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.180939 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f6ad70-d861-46e3-a282-d134389f05fb-kube-api-access-9x2kx" (OuterVolumeSpecName: "kube-api-access-9x2kx") pod "e7f6ad70-d861-46e3-a282-d134389f05fb" (UID: "e7f6ad70-d861-46e3-a282-d134389f05fb"). InnerVolumeSpecName "kube-api-access-9x2kx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.277477 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x2kx\" (UniqueName: \"kubernetes.io/projected/e7f6ad70-d861-46e3-a282-d134389f05fb-kube-api-access-9x2kx\") on node \"crc\" DevicePath \"\""
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.314432 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h5p99_must-gather-vvkcn_e7f6ad70-d861-46e3-a282-d134389f05fb/copy/0.log"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.314872 4799 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerID="1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7" exitCode=143
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.314928 4799 scope.go:117] "RemoveContainer" containerID="1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.315075 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h5p99/must-gather-vvkcn"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.341718 4799 scope.go:117] "RemoveContainer" containerID="bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.374159 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7f6ad70-d861-46e3-a282-d134389f05fb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e7f6ad70-d861-46e3-a282-d134389f05fb" (UID: "e7f6ad70-d861-46e3-a282-d134389f05fb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.379479 4799 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7f6ad70-d861-46e3-a282-d134389f05fb-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.435390 4799 scope.go:117] "RemoveContainer" containerID="1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7"
Feb 16 14:05:29 crc kubenswrapper[4799]: E0216 14:05:29.436076 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7\": container with ID starting with 1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7 not found: ID does not exist" containerID="1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.436107 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7"} err="failed to get container status \"1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7\": rpc error: code = NotFound desc = could not find container \"1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7\": container with ID starting with 1cf3c4aab1ce128bfe504b87705af861bad90fb54e23e02e4a0bfc5c014fc6b7 not found: ID does not exist"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.436166 4799 scope.go:117] "RemoveContainer" containerID="bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561"
Feb 16 14:05:29 crc kubenswrapper[4799]: E0216 14:05:29.436548 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561\": container with ID starting with bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561 not found: ID does not exist" containerID="bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561"
Feb 16 14:05:29 crc kubenswrapper[4799]: I0216 14:05:29.436586 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561"} err="failed to get container status \"bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561\": rpc error: code = NotFound desc = could not find container \"bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561\": container with ID starting with bb5e0bef8057c69ff123cde4827cb26e7712f349451d8d9ecd720da068838561 not found: ID does not exist"
Feb 16 14:05:31 crc kubenswrapper[4799]: I0216 14:05:31.165525 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f6ad70-d861-46e3-a282-d134389f05fb" path="/var/lib/kubelet/pods/e7f6ad70-d861-46e3-a282-d134389f05fb/volumes"
Feb 16 14:05:45 crc kubenswrapper[4799]: I0216 14:05:45.396956 4799 scope.go:117] "RemoveContainer" containerID="65252dafa5da9f7362cfcc89396b3f4bd498fbaff9712b56f4a66459665157d8"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.094162 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-26x5q"]
Feb 16 14:06:30 crc kubenswrapper[4799]: E0216 14:06:30.095695 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" containerName="extract-content"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.095728 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" containerName="extract-content"
Feb 16 14:06:30 crc kubenswrapper[4799]: E0216 14:06:30.095786 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerName="copy"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.095802 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerName="copy"
Feb 16 14:06:30 crc kubenswrapper[4799]: E0216 14:06:30.095841 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" containerName="extract-utilities"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.095893 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" containerName="extract-utilities"
Feb 16 14:06:30 crc kubenswrapper[4799]: E0216 14:06:30.095926 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerName="gather"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.095943 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerName="gather"
Feb 16 14:06:30 crc kubenswrapper[4799]: E0216 14:06:30.095969 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" containerName="registry-server"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.095981 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" containerName="registry-server"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.096417 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerName="copy"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.096471 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f6ad70-d861-46e3-a282-d134389f05fb" containerName="gather"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.096500 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="326781b4-f58d-4e82-9a10-1e5186947b17" containerName="registry-server"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.101833 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.132371 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-26x5q"]
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.206489 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-catalog-content\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.206559 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-utilities\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.206615 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5ss2\" (UniqueName: \"kubernetes.io/projected/6a871934-36ee-486b-8764-0de3e2a3946d-kube-api-access-f5ss2\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.308560 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-catalog-content\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.308629 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-utilities\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.308687 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5ss2\" (UniqueName: \"kubernetes.io/projected/6a871934-36ee-486b-8764-0de3e2a3946d-kube-api-access-f5ss2\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.309588 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-catalog-content\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.309856 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-utilities\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.332111 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5ss2\" (UniqueName: \"kubernetes.io/projected/6a871934-36ee-486b-8764-0de3e2a3946d-kube-api-access-f5ss2\") pod \"redhat-operators-26x5q\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.430185 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26x5q"
Feb 16 14:06:30 crc kubenswrapper[4799]: I0216 14:06:30.968695 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-26x5q"]
Feb 16 14:06:31 crc kubenswrapper[4799]: I0216 14:06:31.006326 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26x5q" event={"ID":"6a871934-36ee-486b-8764-0de3e2a3946d","Type":"ContainerStarted","Data":"54f9a9c5c409f010f8f160e2f9a8d29977c783f86c8a2639b4d15d8c4098a0e0"}
Feb 16 14:06:32 crc kubenswrapper[4799]: I0216 14:06:32.023532 4799 generic.go:334] "Generic (PLEG): container finished" podID="6a871934-36ee-486b-8764-0de3e2a3946d" containerID="d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6" exitCode=0
Feb 16 14:06:32 crc kubenswrapper[4799]: I0216 14:06:32.023661 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26x5q" event={"ID":"6a871934-36ee-486b-8764-0de3e2a3946d","Type":"ContainerDied","Data":"d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6"}
Feb 16 14:06:32 crc kubenswrapper[4799]: I0216 14:06:32.029700 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 14:06:33 crc kubenswrapper[4799]: I0216 14:06:33.036880 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26x5q" event={"ID":"6a871934-36ee-486b-8764-0de3e2a3946d","Type":"ContainerStarted","Data":"b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564"}
Feb 16 14:06:37 crc kubenswrapper[4799]: I0216 14:06:37.100413 4799 generic.go:334] "Generic (PLEG): container finished" podID="6a871934-36ee-486b-8764-0de3e2a3946d" containerID="b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564" exitCode=0
Feb 16 14:06:37 crc 
kubenswrapper[4799]: I0216 14:06:37.101013 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26x5q" event={"ID":"6a871934-36ee-486b-8764-0de3e2a3946d","Type":"ContainerDied","Data":"b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564"} Feb 16 14:06:38 crc kubenswrapper[4799]: I0216 14:06:38.115824 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26x5q" event={"ID":"6a871934-36ee-486b-8764-0de3e2a3946d","Type":"ContainerStarted","Data":"a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a"} Feb 16 14:06:38 crc kubenswrapper[4799]: I0216 14:06:38.139291 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-26x5q" podStartSLOduration=2.628708664 podStartE2EDuration="8.139232019s" podCreationTimestamp="2026-02-16 14:06:30 +0000 UTC" firstStartedPulling="2026-02-16 14:06:32.029255569 +0000 UTC m=+5697.622270933" lastFinishedPulling="2026-02-16 14:06:37.539778924 +0000 UTC m=+5703.132794288" observedRunningTime="2026-02-16 14:06:38.135004529 +0000 UTC m=+5703.728019883" watchObservedRunningTime="2026-02-16 14:06:38.139232019 +0000 UTC m=+5703.732247363" Feb 16 14:06:40 crc kubenswrapper[4799]: I0216 14:06:40.431295 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-26x5q" Feb 16 14:06:40 crc kubenswrapper[4799]: I0216 14:06:40.431813 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-26x5q" Feb 16 14:06:41 crc kubenswrapper[4799]: I0216 14:06:41.492188 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-26x5q" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="registry-server" probeResult="failure" output=< Feb 16 14:06:41 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s 
Feb 16 14:06:41 crc kubenswrapper[4799]: > Feb 16 14:06:45 crc kubenswrapper[4799]: I0216 14:06:45.521088 4799 scope.go:117] "RemoveContainer" containerID="c2a6c575e0266f657e410d6c97e83cc93be7b123ba4cf13cd76b595c6f8f6000" Feb 16 14:06:50 crc kubenswrapper[4799]: I0216 14:06:50.484114 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-26x5q" Feb 16 14:06:50 crc kubenswrapper[4799]: I0216 14:06:50.538246 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-26x5q" Feb 16 14:06:50 crc kubenswrapper[4799]: I0216 14:06:50.738248 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-26x5q"] Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.285930 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-26x5q" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="registry-server" containerID="cri-o://a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a" gracePeriod=2 Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.781763 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-26x5q" Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.852924 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5ss2\" (UniqueName: \"kubernetes.io/projected/6a871934-36ee-486b-8764-0de3e2a3946d-kube-api-access-f5ss2\") pod \"6a871934-36ee-486b-8764-0de3e2a3946d\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.852976 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-catalog-content\") pod \"6a871934-36ee-486b-8764-0de3e2a3946d\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.853106 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-utilities\") pod \"6a871934-36ee-486b-8764-0de3e2a3946d\" (UID: \"6a871934-36ee-486b-8764-0de3e2a3946d\") " Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.854358 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-utilities" (OuterVolumeSpecName: "utilities") pod "6a871934-36ee-486b-8764-0de3e2a3946d" (UID: "6a871934-36ee-486b-8764-0de3e2a3946d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.861449 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a871934-36ee-486b-8764-0de3e2a3946d-kube-api-access-f5ss2" (OuterVolumeSpecName: "kube-api-access-f5ss2") pod "6a871934-36ee-486b-8764-0de3e2a3946d" (UID: "6a871934-36ee-486b-8764-0de3e2a3946d"). InnerVolumeSpecName "kube-api-access-f5ss2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.955721 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5ss2\" (UniqueName: \"kubernetes.io/projected/6a871934-36ee-486b-8764-0de3e2a3946d-kube-api-access-f5ss2\") on node \"crc\" DevicePath \"\"" Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.955756 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:06:52 crc kubenswrapper[4799]: I0216 14:06:52.982647 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a871934-36ee-486b-8764-0de3e2a3946d" (UID: "6a871934-36ee-486b-8764-0de3e2a3946d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.058235 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a871934-36ee-486b-8764-0de3e2a3946d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.299806 4799 generic.go:334] "Generic (PLEG): container finished" podID="6a871934-36ee-486b-8764-0de3e2a3946d" containerID="a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a" exitCode=0 Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.299901 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26x5q" event={"ID":"6a871934-36ee-486b-8764-0de3e2a3946d","Type":"ContainerDied","Data":"a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a"} Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.299977 4799 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26x5q" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.300366 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26x5q" event={"ID":"6a871934-36ee-486b-8764-0de3e2a3946d","Type":"ContainerDied","Data":"54f9a9c5c409f010f8f160e2f9a8d29977c783f86c8a2639b4d15d8c4098a0e0"} Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.300417 4799 scope.go:117] "RemoveContainer" containerID="a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.328484 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-26x5q"] Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.335538 4799 scope.go:117] "RemoveContainer" containerID="b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.339802 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-26x5q"] Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.387952 4799 scope.go:117] "RemoveContainer" containerID="d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.426150 4799 scope.go:117] "RemoveContainer" containerID="a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a" Feb 16 14:06:53 crc kubenswrapper[4799]: E0216 14:06:53.426766 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a\": container with ID starting with a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a not found: ID does not exist" containerID="a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.426849 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a"} err="failed to get container status \"a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a\": rpc error: code = NotFound desc = could not find container \"a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a\": container with ID starting with a74a022a8339112af8e6ff604f785b21ef4b100e9b628cf50433067092a8745a not found: ID does not exist" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.426891 4799 scope.go:117] "RemoveContainer" containerID="b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564" Feb 16 14:06:53 crc kubenswrapper[4799]: E0216 14:06:53.427517 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564\": container with ID starting with b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564 not found: ID does not exist" containerID="b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.427565 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564"} err="failed to get container status \"b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564\": rpc error: code = NotFound desc = could not find container \"b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564\": container with ID starting with b4cc2f292df81b76b25b2ab10044143c1c0cf39a4399297029061f2ca779d564 not found: ID does not exist" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.427592 4799 scope.go:117] "RemoveContainer" containerID="d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6" Feb 16 14:06:53 crc kubenswrapper[4799]: E0216 
14:06:53.428105 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6\": container with ID starting with d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6 not found: ID does not exist" containerID="d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6" Feb 16 14:06:53 crc kubenswrapper[4799]: I0216 14:06:53.428168 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6"} err="failed to get container status \"d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6\": rpc error: code = NotFound desc = could not find container \"d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6\": container with ID starting with d084f1f1a7e73488aab2409de2302976b16ceedb48b9c9f31e10a49a1f0033f6 not found: ID does not exist" Feb 16 14:06:55 crc kubenswrapper[4799]: I0216 14:06:55.173275 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" path="/var/lib/kubelet/pods/6a871934-36ee-486b-8764-0de3e2a3946d/volumes" Feb 16 14:07:21 crc kubenswrapper[4799]: I0216 14:07:21.793648 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:07:21 crc kubenswrapper[4799]: I0216 14:07:21.794375 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 14:07:51 crc kubenswrapper[4799]: I0216 14:07:51.793207 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:07:51 crc kubenswrapper[4799]: I0216 14:07:51.793789 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:08:21 crc kubenswrapper[4799]: I0216 14:08:21.792691 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:08:21 crc kubenswrapper[4799]: I0216 14:08:21.793268 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:08:21 crc kubenswrapper[4799]: I0216 14:08:21.793325 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 14:08:21 crc kubenswrapper[4799]: I0216 14:08:21.794270 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f61a0149fc9439a26bb072a85fd3086e36ae51fb1d0c2377e8f6f1853e70763f"} 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:08:21 crc kubenswrapper[4799]: I0216 14:08:21.794318 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://f61a0149fc9439a26bb072a85fd3086e36ae51fb1d0c2377e8f6f1853e70763f" gracePeriod=600 Feb 16 14:08:22 crc kubenswrapper[4799]: I0216 14:08:22.337422 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="f61a0149fc9439a26bb072a85fd3086e36ae51fb1d0c2377e8f6f1853e70763f" exitCode=0 Feb 16 14:08:22 crc kubenswrapper[4799]: I0216 14:08:22.337525 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"f61a0149fc9439a26bb072a85fd3086e36ae51fb1d0c2377e8f6f1853e70763f"} Feb 16 14:08:22 crc kubenswrapper[4799]: I0216 14:08:22.337756 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724"} Feb 16 14:08:22 crc kubenswrapper[4799]: I0216 14:08:22.337774 4799 scope.go:117] "RemoveContainer" containerID="f8064cd4f0f52ca684cc9ad5e5ad5ced9080e0dd75311ad391077f089f947a94" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.355461 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lppcm/must-gather-m8xqh"] Feb 16 14:08:58 crc kubenswrapper[4799]: E0216 14:08:58.356641 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="extract-content" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.356667 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="extract-content" Feb 16 14:08:58 crc kubenswrapper[4799]: E0216 14:08:58.356719 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="registry-server" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.356731 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="registry-server" Feb 16 14:08:58 crc kubenswrapper[4799]: E0216 14:08:58.356743 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="extract-utilities" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.356754 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="extract-utilities" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.357125 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a871934-36ee-486b-8764-0de3e2a3946d" containerName="registry-server" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.365071 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.370565 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lppcm"/"openshift-service-ca.crt" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.370643 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lppcm"/"kube-root-ca.crt" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.395567 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lppcm/must-gather-m8xqh"] Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.477807 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9hg\" (UniqueName: \"kubernetes.io/projected/17b52a93-fe09-496f-b253-1e84f1cbf8af-kube-api-access-5d9hg\") pod \"must-gather-m8xqh\" (UID: \"17b52a93-fe09-496f-b253-1e84f1cbf8af\") " pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.478361 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17b52a93-fe09-496f-b253-1e84f1cbf8af-must-gather-output\") pod \"must-gather-m8xqh\" (UID: \"17b52a93-fe09-496f-b253-1e84f1cbf8af\") " pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.579950 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9hg\" (UniqueName: \"kubernetes.io/projected/17b52a93-fe09-496f-b253-1e84f1cbf8af-kube-api-access-5d9hg\") pod \"must-gather-m8xqh\" (UID: \"17b52a93-fe09-496f-b253-1e84f1cbf8af\") " pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.580165 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17b52a93-fe09-496f-b253-1e84f1cbf8af-must-gather-output\") pod \"must-gather-m8xqh\" (UID: \"17b52a93-fe09-496f-b253-1e84f1cbf8af\") " pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.580591 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17b52a93-fe09-496f-b253-1e84f1cbf8af-must-gather-output\") pod \"must-gather-m8xqh\" (UID: \"17b52a93-fe09-496f-b253-1e84f1cbf8af\") " pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.605771 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d9hg\" (UniqueName: \"kubernetes.io/projected/17b52a93-fe09-496f-b253-1e84f1cbf8af-kube-api-access-5d9hg\") pod \"must-gather-m8xqh\" (UID: \"17b52a93-fe09-496f-b253-1e84f1cbf8af\") " pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:08:58 crc kubenswrapper[4799]: I0216 14:08:58.694061 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:08:59 crc kubenswrapper[4799]: I0216 14:08:59.217443 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lppcm/must-gather-m8xqh"] Feb 16 14:08:59 crc kubenswrapper[4799]: I0216 14:08:59.775472 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/must-gather-m8xqh" event={"ID":"17b52a93-fe09-496f-b253-1e84f1cbf8af","Type":"ContainerStarted","Data":"4bd5f9234ba24cbd4f3c3cda9364b355a93c267f66ff15da2bb29c3e088db509"} Feb 16 14:08:59 crc kubenswrapper[4799]: I0216 14:08:59.775850 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/must-gather-m8xqh" event={"ID":"17b52a93-fe09-496f-b253-1e84f1cbf8af","Type":"ContainerStarted","Data":"4f078e4cf11415feab6e5a4cb386eda1102186c5f564e063d2b6844a567b6220"} Feb 16 14:08:59 crc kubenswrapper[4799]: I0216 14:08:59.775867 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/must-gather-m8xqh" event={"ID":"17b52a93-fe09-496f-b253-1e84f1cbf8af","Type":"ContainerStarted","Data":"931d4914b709dc3f6895c0e9bf40937bcde83e23b9632437bec3e42448460530"} Feb 16 14:08:59 crc kubenswrapper[4799]: I0216 14:08:59.796026 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lppcm/must-gather-m8xqh" podStartSLOduration=1.7960065379999999 podStartE2EDuration="1.796006538s" podCreationTimestamp="2026-02-16 14:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:08:59.789950296 +0000 UTC m=+5845.382965630" watchObservedRunningTime="2026-02-16 14:08:59.796006538 +0000 UTC m=+5845.389021872" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.666511 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lppcm/crc-debug-hfms5"] Feb 16 14:09:03 crc 
kubenswrapper[4799]: I0216 14:09:03.668265 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.671810 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lppcm"/"default-dockercfg-lxj5r" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.707352 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-host\") pod \"crc-debug-hfms5\" (UID: \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\") " pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.707839 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7s8\" (UniqueName: \"kubernetes.io/projected/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-kube-api-access-mm7s8\") pod \"crc-debug-hfms5\" (UID: \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\") " pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.809419 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm7s8\" (UniqueName: \"kubernetes.io/projected/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-kube-api-access-mm7s8\") pod \"crc-debug-hfms5\" (UID: \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\") " pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.809496 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-host\") pod \"crc-debug-hfms5\" (UID: \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\") " pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.809647 4799 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-host\") pod \"crc-debug-hfms5\" (UID: \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\") " pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.850364 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm7s8\" (UniqueName: \"kubernetes.io/projected/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-kube-api-access-mm7s8\") pod \"crc-debug-hfms5\" (UID: \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\") " pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:03 crc kubenswrapper[4799]: I0216 14:09:03.991307 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:04 crc kubenswrapper[4799]: I0216 14:09:04.838791 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/crc-debug-hfms5" event={"ID":"d576f2af-5f62-43eb-826c-6f1e9b4bb68d","Type":"ContainerStarted","Data":"b31fedde167787f41054d5352710e56f41696f576aed09edb787881c1060862a"} Feb 16 14:09:04 crc kubenswrapper[4799]: I0216 14:09:04.839483 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/crc-debug-hfms5" event={"ID":"d576f2af-5f62-43eb-826c-6f1e9b4bb68d","Type":"ContainerStarted","Data":"897299e7365208eb552df1db29f1da460735bb5d3a66b8f4310731fbb47a835b"} Feb 16 14:09:04 crc kubenswrapper[4799]: I0216 14:09:04.871640 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lppcm/crc-debug-hfms5" podStartSLOduration=1.8716179849999999 podStartE2EDuration="1.871617985s" podCreationTimestamp="2026-02-16 14:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:09:04.869257308 +0000 UTC m=+5850.462272652" 
watchObservedRunningTime="2026-02-16 14:09:04.871617985 +0000 UTC m=+5850.464633329" Feb 16 14:09:41 crc kubenswrapper[4799]: I0216 14:09:41.181587 4799 generic.go:334] "Generic (PLEG): container finished" podID="d576f2af-5f62-43eb-826c-6f1e9b4bb68d" containerID="b31fedde167787f41054d5352710e56f41696f576aed09edb787881c1060862a" exitCode=0 Feb 16 14:09:41 crc kubenswrapper[4799]: I0216 14:09:41.181682 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/crc-debug-hfms5" event={"ID":"d576f2af-5f62-43eb-826c-6f1e9b4bb68d","Type":"ContainerDied","Data":"b31fedde167787f41054d5352710e56f41696f576aed09edb787881c1060862a"} Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.325644 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.371983 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lppcm/crc-debug-hfms5"] Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.380393 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lppcm/crc-debug-hfms5"] Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.526795 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm7s8\" (UniqueName: \"kubernetes.io/projected/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-kube-api-access-mm7s8\") pod \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\" (UID: \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\") " Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.526900 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-host\") pod \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\" (UID: \"d576f2af-5f62-43eb-826c-6f1e9b4bb68d\") " Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.527191 4799 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-host" (OuterVolumeSpecName: "host") pod "d576f2af-5f62-43eb-826c-6f1e9b4bb68d" (UID: "d576f2af-5f62-43eb-826c-6f1e9b4bb68d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.527684 4799 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-host\") on node \"crc\" DevicePath \"\"" Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.541924 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-kube-api-access-mm7s8" (OuterVolumeSpecName: "kube-api-access-mm7s8") pod "d576f2af-5f62-43eb-826c-6f1e9b4bb68d" (UID: "d576f2af-5f62-43eb-826c-6f1e9b4bb68d"). InnerVolumeSpecName "kube-api-access-mm7s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:09:42 crc kubenswrapper[4799]: I0216 14:09:42.629248 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm7s8\" (UniqueName: \"kubernetes.io/projected/d576f2af-5f62-43eb-826c-6f1e9b4bb68d-kube-api-access-mm7s8\") on node \"crc\" DevicePath \"\"" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.161534 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d576f2af-5f62-43eb-826c-6f1e9b4bb68d" path="/var/lib/kubelet/pods/d576f2af-5f62-43eb-826c-6f1e9b4bb68d/volumes" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.201383 4799 scope.go:117] "RemoveContainer" containerID="b31fedde167787f41054d5352710e56f41696f576aed09edb787881c1060862a" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.201511 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-hfms5" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.553967 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lppcm/crc-debug-8szdx"] Feb 16 14:09:43 crc kubenswrapper[4799]: E0216 14:09:43.554403 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d576f2af-5f62-43eb-826c-6f1e9b4bb68d" containerName="container-00" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.554416 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="d576f2af-5f62-43eb-826c-6f1e9b4bb68d" containerName="container-00" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.554610 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="d576f2af-5f62-43eb-826c-6f1e9b4bb68d" containerName="container-00" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.555473 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.558169 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lppcm"/"default-dockercfg-lxj5r" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.652097 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0117eab2-ace3-4faf-a19b-856986fbdac7-host\") pod \"crc-debug-8szdx\" (UID: \"0117eab2-ace3-4faf-a19b-856986fbdac7\") " pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.652376 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlsml\" (UniqueName: \"kubernetes.io/projected/0117eab2-ace3-4faf-a19b-856986fbdac7-kube-api-access-qlsml\") pod \"crc-debug-8szdx\" (UID: \"0117eab2-ace3-4faf-a19b-856986fbdac7\") " 
pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.755438 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0117eab2-ace3-4faf-a19b-856986fbdac7-host\") pod \"crc-debug-8szdx\" (UID: \"0117eab2-ace3-4faf-a19b-856986fbdac7\") " pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.755659 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0117eab2-ace3-4faf-a19b-856986fbdac7-host\") pod \"crc-debug-8szdx\" (UID: \"0117eab2-ace3-4faf-a19b-856986fbdac7\") " pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.755928 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlsml\" (UniqueName: \"kubernetes.io/projected/0117eab2-ace3-4faf-a19b-856986fbdac7-kube-api-access-qlsml\") pod \"crc-debug-8szdx\" (UID: \"0117eab2-ace3-4faf-a19b-856986fbdac7\") " pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.787104 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlsml\" (UniqueName: \"kubernetes.io/projected/0117eab2-ace3-4faf-a19b-856986fbdac7-kube-api-access-qlsml\") pod \"crc-debug-8szdx\" (UID: \"0117eab2-ace3-4faf-a19b-856986fbdac7\") " pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:43 crc kubenswrapper[4799]: I0216 14:09:43.875307 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:44 crc kubenswrapper[4799]: I0216 14:09:44.213817 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/crc-debug-8szdx" event={"ID":"0117eab2-ace3-4faf-a19b-856986fbdac7","Type":"ContainerStarted","Data":"dfd67f83c5e460c236c2ea4bca4b42ad8ad76cf05581ffc7695b813d05eceb31"} Feb 16 14:09:44 crc kubenswrapper[4799]: I0216 14:09:44.214208 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/crc-debug-8szdx" event={"ID":"0117eab2-ace3-4faf-a19b-856986fbdac7","Type":"ContainerStarted","Data":"dc89075feb8484ed3208980f54f4f1a8b392fc94980e6d53dbc3c68cf83f1186"} Feb 16 14:09:45 crc kubenswrapper[4799]: I0216 14:09:45.225814 4799 generic.go:334] "Generic (PLEG): container finished" podID="0117eab2-ace3-4faf-a19b-856986fbdac7" containerID="dfd67f83c5e460c236c2ea4bca4b42ad8ad76cf05581ffc7695b813d05eceb31" exitCode=0 Feb 16 14:09:45 crc kubenswrapper[4799]: I0216 14:09:45.226700 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/crc-debug-8szdx" event={"ID":"0117eab2-ace3-4faf-a19b-856986fbdac7","Type":"ContainerDied","Data":"dfd67f83c5e460c236c2ea4bca4b42ad8ad76cf05581ffc7695b813d05eceb31"} Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.350212 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.516890 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lppcm/crc-debug-8szdx"] Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.524903 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lppcm/crc-debug-8szdx"] Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.547180 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlsml\" (UniqueName: \"kubernetes.io/projected/0117eab2-ace3-4faf-a19b-856986fbdac7-kube-api-access-qlsml\") pod \"0117eab2-ace3-4faf-a19b-856986fbdac7\" (UID: \"0117eab2-ace3-4faf-a19b-856986fbdac7\") " Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.547252 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0117eab2-ace3-4faf-a19b-856986fbdac7-host\") pod \"0117eab2-ace3-4faf-a19b-856986fbdac7\" (UID: \"0117eab2-ace3-4faf-a19b-856986fbdac7\") " Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.547422 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0117eab2-ace3-4faf-a19b-856986fbdac7-host" (OuterVolumeSpecName: "host") pod "0117eab2-ace3-4faf-a19b-856986fbdac7" (UID: "0117eab2-ace3-4faf-a19b-856986fbdac7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.548276 4799 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0117eab2-ace3-4faf-a19b-856986fbdac7-host\") on node \"crc\" DevicePath \"\"" Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.554075 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0117eab2-ace3-4faf-a19b-856986fbdac7-kube-api-access-qlsml" (OuterVolumeSpecName: "kube-api-access-qlsml") pod "0117eab2-ace3-4faf-a19b-856986fbdac7" (UID: "0117eab2-ace3-4faf-a19b-856986fbdac7"). InnerVolumeSpecName "kube-api-access-qlsml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:09:46 crc kubenswrapper[4799]: I0216 14:09:46.649563 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlsml\" (UniqueName: \"kubernetes.io/projected/0117eab2-ace3-4faf-a19b-856986fbdac7-kube-api-access-qlsml\") on node \"crc\" DevicePath \"\"" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.162911 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0117eab2-ace3-4faf-a19b-856986fbdac7" path="/var/lib/kubelet/pods/0117eab2-ace3-4faf-a19b-856986fbdac7/volumes" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.243558 4799 scope.go:117] "RemoveContainer" containerID="dfd67f83c5e460c236c2ea4bca4b42ad8ad76cf05581ffc7695b813d05eceb31" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.243589 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-8szdx" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.729095 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lppcm/crc-debug-qfppg"] Feb 16 14:09:47 crc kubenswrapper[4799]: E0216 14:09:47.730177 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0117eab2-ace3-4faf-a19b-856986fbdac7" containerName="container-00" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.730194 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="0117eab2-ace3-4faf-a19b-856986fbdac7" containerName="container-00" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.730433 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="0117eab2-ace3-4faf-a19b-856986fbdac7" containerName="container-00" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.731450 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.734636 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lppcm"/"default-dockercfg-lxj5r" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.779278 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jmhp\" (UniqueName: \"kubernetes.io/projected/717af918-3366-4638-b2ad-5735abfec78c-kube-api-access-9jmhp\") pod \"crc-debug-qfppg\" (UID: \"717af918-3366-4638-b2ad-5735abfec78c\") " pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.779388 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/717af918-3366-4638-b2ad-5735abfec78c-host\") pod \"crc-debug-qfppg\" (UID: \"717af918-3366-4638-b2ad-5735abfec78c\") " 
pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.881684 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jmhp\" (UniqueName: \"kubernetes.io/projected/717af918-3366-4638-b2ad-5735abfec78c-kube-api-access-9jmhp\") pod \"crc-debug-qfppg\" (UID: \"717af918-3366-4638-b2ad-5735abfec78c\") " pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.881782 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/717af918-3366-4638-b2ad-5735abfec78c-host\") pod \"crc-debug-qfppg\" (UID: \"717af918-3366-4638-b2ad-5735abfec78c\") " pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.881893 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/717af918-3366-4638-b2ad-5735abfec78c-host\") pod \"crc-debug-qfppg\" (UID: \"717af918-3366-4638-b2ad-5735abfec78c\") " pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:47 crc kubenswrapper[4799]: I0216 14:09:47.904820 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jmhp\" (UniqueName: \"kubernetes.io/projected/717af918-3366-4638-b2ad-5735abfec78c-kube-api-access-9jmhp\") pod \"crc-debug-qfppg\" (UID: \"717af918-3366-4638-b2ad-5735abfec78c\") " pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:48 crc kubenswrapper[4799]: I0216 14:09:48.058682 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:48 crc kubenswrapper[4799]: W0216 14:09:48.090115 4799 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod717af918_3366_4638_b2ad_5735abfec78c.slice/crio-16fb1abe0e4d1481da2299faf28547a3b406c61803c46cae06f8c457596a4536 WatchSource:0}: Error finding container 16fb1abe0e4d1481da2299faf28547a3b406c61803c46cae06f8c457596a4536: Status 404 returned error can't find the container with id 16fb1abe0e4d1481da2299faf28547a3b406c61803c46cae06f8c457596a4536 Feb 16 14:09:48 crc kubenswrapper[4799]: I0216 14:09:48.255388 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/crc-debug-qfppg" event={"ID":"717af918-3366-4638-b2ad-5735abfec78c","Type":"ContainerStarted","Data":"16fb1abe0e4d1481da2299faf28547a3b406c61803c46cae06f8c457596a4536"} Feb 16 14:09:49 crc kubenswrapper[4799]: I0216 14:09:49.266963 4799 generic.go:334] "Generic (PLEG): container finished" podID="717af918-3366-4638-b2ad-5735abfec78c" containerID="160f1d374290620830fcfdab1529786e9c8a1fbe658f85d70ee0cbc79731476a" exitCode=0 Feb 16 14:09:49 crc kubenswrapper[4799]: I0216 14:09:49.267012 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/crc-debug-qfppg" event={"ID":"717af918-3366-4638-b2ad-5735abfec78c","Type":"ContainerDied","Data":"160f1d374290620830fcfdab1529786e9c8a1fbe658f85d70ee0cbc79731476a"} Feb 16 14:09:49 crc kubenswrapper[4799]: I0216 14:09:49.306252 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lppcm/crc-debug-qfppg"] Feb 16 14:09:49 crc kubenswrapper[4799]: I0216 14:09:49.315240 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lppcm/crc-debug-qfppg"] Feb 16 14:09:50 crc kubenswrapper[4799]: I0216 14:09:50.375059 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:09:50 crc kubenswrapper[4799]: I0216 14:09:50.432973 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/717af918-3366-4638-b2ad-5735abfec78c-host\") pod \"717af918-3366-4638-b2ad-5735abfec78c\" (UID: \"717af918-3366-4638-b2ad-5735abfec78c\") " Feb 16 14:09:50 crc kubenswrapper[4799]: I0216 14:09:50.433040 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jmhp\" (UniqueName: \"kubernetes.io/projected/717af918-3366-4638-b2ad-5735abfec78c-kube-api-access-9jmhp\") pod \"717af918-3366-4638-b2ad-5735abfec78c\" (UID: \"717af918-3366-4638-b2ad-5735abfec78c\") " Feb 16 14:09:50 crc kubenswrapper[4799]: I0216 14:09:50.434152 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/717af918-3366-4638-b2ad-5735abfec78c-host" (OuterVolumeSpecName: "host") pod "717af918-3366-4638-b2ad-5735abfec78c" (UID: "717af918-3366-4638-b2ad-5735abfec78c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:09:50 crc kubenswrapper[4799]: I0216 14:09:50.442608 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/717af918-3366-4638-b2ad-5735abfec78c-kube-api-access-9jmhp" (OuterVolumeSpecName: "kube-api-access-9jmhp") pod "717af918-3366-4638-b2ad-5735abfec78c" (UID: "717af918-3366-4638-b2ad-5735abfec78c"). InnerVolumeSpecName "kube-api-access-9jmhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:09:50 crc kubenswrapper[4799]: I0216 14:09:50.535783 4799 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/717af918-3366-4638-b2ad-5735abfec78c-host\") on node \"crc\" DevicePath \"\"" Feb 16 14:09:50 crc kubenswrapper[4799]: I0216 14:09:50.535814 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jmhp\" (UniqueName: \"kubernetes.io/projected/717af918-3366-4638-b2ad-5735abfec78c-kube-api-access-9jmhp\") on node \"crc\" DevicePath \"\"" Feb 16 14:09:51 crc kubenswrapper[4799]: I0216 14:09:51.163924 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="717af918-3366-4638-b2ad-5735abfec78c" path="/var/lib/kubelet/pods/717af918-3366-4638-b2ad-5735abfec78c/volumes" Feb 16 14:09:51 crc kubenswrapper[4799]: I0216 14:09:51.284355 4799 scope.go:117] "RemoveContainer" containerID="160f1d374290620830fcfdab1529786e9c8a1fbe658f85d70ee0cbc79731476a" Feb 16 14:09:51 crc kubenswrapper[4799]: I0216 14:09:51.284364 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/crc-debug-qfppg" Feb 16 14:10:38 crc kubenswrapper[4799]: I0216 14:10:38.349339 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cdd7b58f8-6bxrn_b2510448-629c-43df-9492-a07c96a8b5f0/barbican-api/0.log" Feb 16 14:10:38 crc kubenswrapper[4799]: I0216 14:10:38.556754 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cdd7b58f8-6bxrn_b2510448-629c-43df-9492-a07c96a8b5f0/barbican-api-log/0.log" Feb 16 14:10:38 crc kubenswrapper[4799]: I0216 14:10:38.559905 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5584d58cd8-z4cwc_6cadefef-9278-4473-a8c8-97911ac9b269/barbican-keystone-listener/0.log" Feb 16 14:10:38 crc kubenswrapper[4799]: I0216 14:10:38.686712 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5584d58cd8-z4cwc_6cadefef-9278-4473-a8c8-97911ac9b269/barbican-keystone-listener-log/0.log" Feb 16 14:10:38 crc kubenswrapper[4799]: I0216 14:10:38.773097 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56d6b7fd5c-s6xhs_99699fe4-f20c-42e0-9c4f-029b9ee24fdb/barbican-worker/0.log" Feb 16 14:10:38 crc kubenswrapper[4799]: I0216 14:10:38.837053 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-56d6b7fd5c-s6xhs_99699fe4-f20c-42e0-9c4f-029b9ee24fdb/barbican-worker-log/0.log" Feb 16 14:10:38 crc kubenswrapper[4799]: I0216 14:10:38.967588 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bcsvs_4ea66d5c-7325-440d-816c-c02db1d1bf90/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:39 crc kubenswrapper[4799]: I0216 14:10:39.135322 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13a099ed-6620-4310-85c7-986b1a366a1b/ceilometer-central-agent/0.log" 
Feb 16 14:10:39 crc kubenswrapper[4799]: I0216 14:10:39.178931 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13a099ed-6620-4310-85c7-986b1a366a1b/ceilometer-notification-agent/0.log" Feb 16 14:10:39 crc kubenswrapper[4799]: I0216 14:10:39.209496 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13a099ed-6620-4310-85c7-986b1a366a1b/proxy-httpd/0.log" Feb 16 14:10:39 crc kubenswrapper[4799]: I0216 14:10:39.244945 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13a099ed-6620-4310-85c7-986b1a366a1b/sg-core/0.log" Feb 16 14:10:39 crc kubenswrapper[4799]: I0216 14:10:39.506758 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_15c3718e-7e67-4586-8532-6883f43129bd/cinder-api-log/0.log" Feb 16 14:10:39 crc kubenswrapper[4799]: I0216 14:10:39.832755 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ea67e1e3-d03f-49fa-a150-9ff09fca74ba/probe/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.146071 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0404faed-9e4d-4374-83ef-13dc13839e7b/cinder-scheduler/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.161663 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0404faed-9e4d-4374-83ef-13dc13839e7b/probe/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.189794 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ea67e1e3-d03f-49fa-a150-9ff09fca74ba/cinder-backup/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.282216 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_15c3718e-7e67-4586-8532-6883f43129bd/cinder-api/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.482588 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-nfs-0_64beb0d2-7a13-4a86-b4f8-8843611c254c/probe/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.543422 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_64beb0d2-7a13-4a86-b4f8-8843611c254c/cinder-volume/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.682719 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_5f3698ec-879f-4ead-8ac9-e08fa64c655e/probe/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.773345 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lw7hf_e8cd035a-4f87-419c-994a-1ab09e6da101/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:40 crc kubenswrapper[4799]: I0216 14:10:40.929988 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_5f3698ec-879f-4ead-8ac9-e08fa64c655e/cinder-volume/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.035471 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tp2v5_db459b41-b7ab-4982-8889-11233d549c9b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.122612 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76846d67df-2cl9g_77997ea7-755d-40ed-94d6-baab5bd86a9b/init/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.322592 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76846d67df-2cl9g_77997ea7-755d-40ed-94d6-baab5bd86a9b/init/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.404192 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4jwvz_ceaa23db-d28e-4d2f-bf84-7336146bfb41/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.493463 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76846d67df-2cl9g_77997ea7-755d-40ed-94d6-baab5bd86a9b/dnsmasq-dns/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.603448 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0241ff0c-3747-414a-b48e-72ac52d5836a/glance-httpd/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.624676 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0241ff0c-3747-414a-b48e-72ac52d5836a/glance-log/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.802609 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71e60503-bb2b-452d-a96a-ef5ec0745d94/glance-log/0.log" Feb 16 14:10:41 crc kubenswrapper[4799]: I0216 14:10:41.825710 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71e60503-bb2b-452d-a96a-ef5ec0745d94/glance-httpd/0.log" Feb 16 14:10:42 crc kubenswrapper[4799]: I0216 14:10:42.210542 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b64799464-xwrv9_aa66dcb2-43c2-4824-80f8-30911a4a8c72/horizon/0.log" Feb 16 14:10:42 crc kubenswrapper[4799]: I0216 14:10:42.285022 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-8fk67_6ad5bcca-c29e-4594-8698-4a139a80eb92/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:42 crc kubenswrapper[4799]: I0216 14:10:42.556434 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-thrw7_ff2369e0-1189-4a8f-abca-c8db832a8e8c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:42 crc kubenswrapper[4799]: I0216 14:10:42.782334 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29520781-vcnnm_2a2944ce-d43d-455d-81c0-21e082c4c544/keystone-cron/0.log" Feb 16 14:10:42 crc kubenswrapper[4799]: I0216 14:10:42.939196 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b64799464-xwrv9_aa66dcb2-43c2-4824-80f8-30911a4a8c72/horizon-log/0.log" Feb 16 14:10:42 crc kubenswrapper[4799]: I0216 14:10:42.997726 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29520841-vcz9g_d909d2d9-21eb-4176-9378-dbba67a87b93/keystone-cron/0.log" Feb 16 14:10:43 crc kubenswrapper[4799]: I0216 14:10:43.075937 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_11134cac-9930-424d-8a67-69a6ba98ff21/kube-state-metrics/0.log" Feb 16 14:10:43 crc kubenswrapper[4799]: I0216 14:10:43.187976 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74bd488478-wqpd6_f3bee5f6-a064-4641-9a90-de58c60eb3aa/keystone-api/0.log" Feb 16 14:10:43 crc kubenswrapper[4799]: I0216 14:10:43.285655 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tnk8z_c895c98f-f5b4-4f98-b498-fe07218cad2f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:43 crc kubenswrapper[4799]: I0216 14:10:43.749158 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-f2pzh_fbfe848b-c120-4ca7-993f-47c1e3902ed1/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:43 crc kubenswrapper[4799]: I0216 14:10:43.836693 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5bd85f5c47-gbtmk_cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd/neutron-api/0.log" Feb 16 14:10:43 crc kubenswrapper[4799]: I0216 14:10:43.869251 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bd85f5c47-gbtmk_cb8e7f4b-74b3-4c75-83c0-d6af7bc8ffdd/neutron-httpd/0.log" Feb 16 14:10:44 crc kubenswrapper[4799]: I0216 14:10:44.616022 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ae5bc2f2-bb4d-4eb9-8f58-84edbff777f6/nova-cell0-conductor-conductor/0.log" Feb 16 14:10:44 crc kubenswrapper[4799]: I0216 14:10:44.869478 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_47764882-7881-4fbd-b682-c75a79736dea/nova-cell1-conductor-conductor/0.log" Feb 16 14:10:45 crc kubenswrapper[4799]: I0216 14:10:45.348710 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fa473e85-e345-4e62-b615-b9fc5b5ac754/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 14:10:45 crc kubenswrapper[4799]: I0216 14:10:45.376770 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-zr78d_9ecaed67-149c-4202-b3c9-c186d68a4b9a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:45 crc kubenswrapper[4799]: I0216 14:10:45.551640 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9b1e557e-1e13-4d03-a4b9-fddccf7fc783/nova-api-log/0.log" Feb 16 14:10:45 crc kubenswrapper[4799]: I0216 14:10:45.704406 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_14e134b2-1c07-4a20-9bc6-ea4c75878094/nova-metadata-log/0.log" Feb 16 14:10:46 crc kubenswrapper[4799]: I0216 14:10:46.263759 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_06ddc5ff-d6d1-4997-8763-e97603e7df10/mysql-bootstrap/0.log" Feb 16 14:10:46 crc kubenswrapper[4799]: I0216 
14:10:46.348658 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9b1e557e-1e13-4d03-a4b9-fddccf7fc783/nova-api-api/0.log" Feb 16 14:10:46 crc kubenswrapper[4799]: I0216 14:10:46.386000 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9e83b2fa-d9e9-4ed6-bc5f-8c119c219a53/nova-scheduler-scheduler/0.log" Feb 16 14:10:46 crc kubenswrapper[4799]: I0216 14:10:46.476618 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_06ddc5ff-d6d1-4997-8763-e97603e7df10/mysql-bootstrap/0.log" Feb 16 14:10:46 crc kubenswrapper[4799]: I0216 14:10:46.607083 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_06ddc5ff-d6d1-4997-8763-e97603e7df10/galera/0.log" Feb 16 14:10:46 crc kubenswrapper[4799]: I0216 14:10:46.735518 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_19d52513-0bac-433d-8167-3abd90820fff/mysql-bootstrap/0.log" Feb 16 14:10:46 crc kubenswrapper[4799]: I0216 14:10:46.898280 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_19d52513-0bac-433d-8167-3abd90820fff/galera/0.log" Feb 16 14:10:47 crc kubenswrapper[4799]: I0216 14:10:47.031008 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_19d52513-0bac-433d-8167-3abd90820fff/mysql-bootstrap/0.log" Feb 16 14:10:47 crc kubenswrapper[4799]: I0216 14:10:47.122494 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8e024c88-16fc-4003-bc76-165ac4445e8f/openstackclient/0.log" Feb 16 14:10:47 crc kubenswrapper[4799]: I0216 14:10:47.466064 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-cbnmk_c4e49631-ab2b-49a4-befb-ccc2df5a47c4/openstack-network-exporter/0.log" Feb 16 14:10:47 crc kubenswrapper[4799]: I0216 14:10:47.595581 4799 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6rnj7_46a97d94-f787-4e62-86df-1ee58bdae9ce/ovsdb-server-init/0.log" Feb 16 14:10:47 crc kubenswrapper[4799]: I0216 14:10:47.780998 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6rnj7_46a97d94-f787-4e62-86df-1ee58bdae9ce/ovsdb-server/0.log" Feb 16 14:10:47 crc kubenswrapper[4799]: I0216 14:10:47.795108 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6rnj7_46a97d94-f787-4e62-86df-1ee58bdae9ce/ovsdb-server-init/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.051312 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wr6ph_d0a8e986-71a6-47cc-a34e-ddc323df4af4/ovn-controller/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.178564 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6rnj7_46a97d94-f787-4e62-86df-1ee58bdae9ce/ovs-vswitchd/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.243197 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_14e134b2-1c07-4a20-9bc6-ea4c75878094/nova-metadata-metadata/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.305170 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hpddx_e3f7c5d7-95f5-4b8b-9a17-99c4a179064e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.427581 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68382ea2-c66d-4ea6-be55-f77490a81898/openstack-network-exporter/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.501833 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68382ea2-c66d-4ea6-be55-f77490a81898/ovn-northd/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.614760 4799 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7/openstack-network-exporter/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.709447 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2c8a7e69-a5da-4b7f-9ada-6ba2ceec88d7/ovsdbserver-nb/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.777722 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b93c98d8-9585-4406-8d4f-54ebdb84ee2d/openstack-network-exporter/0.log" Feb 16 14:10:48 crc kubenswrapper[4799]: I0216 14:10:48.847165 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b93c98d8-9585-4406-8d4f-54ebdb84ee2d/ovsdbserver-sb/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.158840 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/init-config-reloader/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.166438 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f58d8f5db-4k8dn_d2c303ca-c915-4f80-90b2-5e23882687b5/placement-api/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.319468 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f58d8f5db-4k8dn_d2c303ca-c915-4f80-90b2-5e23882687b5/placement-log/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.442509 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/config-reloader/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.468168 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/init-config-reloader/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.504024 
4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/prometheus/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.587605 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c10be81f-4b62-414a-bfec-3851332ecd48/thanos-sidecar/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.695306 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52adb145-1b05-4515-a214-83731e3504b4/setup-container/0.log" Feb 16 14:10:49 crc kubenswrapper[4799]: I0216 14:10:49.921768 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52adb145-1b05-4515-a214-83731e3504b4/setup-container/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.033148 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5b6ff320-8742-454a-9a6e-766db7e2c3a8/setup-container/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.059776 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52adb145-1b05-4515-a214-83731e3504b4/rabbitmq/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.255571 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5b6ff320-8742-454a-9a6e-766db7e2c3a8/setup-container/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.273931 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5b6ff320-8742-454a-9a6e-766db7e2c3a8/rabbitmq/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.300199 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a6be377-3c2d-46ab-a9b1-3faa91644a58/setup-container/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.561153 4799 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a6be377-3c2d-46ab-a9b1-3faa91644a58/rabbitmq/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.586384 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a6be377-3c2d-46ab-a9b1-3faa91644a58/setup-container/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.632467 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v2558_cb5e39c0-c809-4971-a2ea-f2a01d9f4493/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.753350 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8xh9d_4e06d186-e0e8-4b62-8e6a-087d37dbd8c5/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:50 crc kubenswrapper[4799]: I0216 14:10:50.889241 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-n9fk8_7cc337e4-c7f3-47cd-bd87-4d6230d8efcb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.055666 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bgvxk_bfb29f60-f76e-40d0-b672-ae1be3eb5c84/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.124388 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2kzk7_b7657976-4772-4623-b14e-c9de2130efa5/ssh-known-hosts-edpm-deployment/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.349651 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f54946f5f-2jrb5_441c04e7-2794-48cf-bc03-4c13536d22c4/proxy-server/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.500591 
4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-j6ghf_e330eb09-5b74-44cd-9812-1aaada5f979c/swift-ring-rebalance/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.540552 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f54946f5f-2jrb5_441c04e7-2794-48cf-bc03-4c13536d22c4/proxy-httpd/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.591235 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/account-auditor/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.710701 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/account-reaper/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.770865 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/container-auditor/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.792360 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.792411 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.802574 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/account-replicator/0.log" Feb 16 
14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.843879 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/account-server/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.966279 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/container-replicator/0.log" Feb 16 14:10:51 crc kubenswrapper[4799]: I0216 14:10:51.980870 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/container-updater/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.075389 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/container-server/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.138547 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-auditor/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.195046 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-expirer/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.245862 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-replicator/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.285893 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-server/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.561479 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/object-updater/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.608909 4799 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/rsync/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.654735 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_95bfd980-54e7-4b29-a896-dc1cc52291fd/swift-recon-cron/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.834688 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-r6ch9_8ca97eaa-cb90-4bfe-9b2d-1a5a80d9fbf7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:52 crc kubenswrapper[4799]: I0216 14:10:52.908576 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c70f1fe2-3c0d-4fb1-a893-a2dbddec9afd/tempest-tests-tempest-tests-runner/0.log" Feb 16 14:10:53 crc kubenswrapper[4799]: I0216 14:10:53.065506 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ba576df3-d525-4b57-9913-4c2c86246682/test-operator-logs-container/0.log" Feb 16 14:10:53 crc kubenswrapper[4799]: I0216 14:10:53.205470 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jjr27_9dd7738f-7fe5-4522-94a5-afa6cf94a54d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 14:10:53 crc kubenswrapper[4799]: I0216 14:10:53.848502 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_9bd018cf-77c0-4f89-a1b7-e821440b0fe1/watcher-applier/0.log" Feb 16 14:10:54 crc kubenswrapper[4799]: I0216 14:10:54.402890 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9dddb140-3f08-4b16-97bf-be71806e7add/watcher-api-log/0.log" Feb 16 14:10:55 crc kubenswrapper[4799]: I0216 14:10:55.162655 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_f68cb9f4-b04b-4b52-92e0-153239877a17/memcached/0.log" Feb 16 14:10:56 crc kubenswrapper[4799]: I0216 14:10:56.867633 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_a15e35f6-4998-4a70-9f95-272ba07a39ef/watcher-decision-engine/0.log" Feb 16 14:10:57 crc kubenswrapper[4799]: I0216 14:10:57.609223 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9dddb140-3f08-4b16-97bf-be71806e7add/watcher-api/0.log" Feb 16 14:11:21 crc kubenswrapper[4799]: I0216 14:11:21.793122 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:11:21 crc kubenswrapper[4799]: I0216 14:11:21.793892 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:11:22 crc kubenswrapper[4799]: I0216 14:11:22.281707 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/util/0.log" Feb 16 14:11:22 crc kubenswrapper[4799]: I0216 14:11:22.381352 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/util/0.log" Feb 16 14:11:22 crc kubenswrapper[4799]: I0216 14:11:22.678115 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/pull/0.log" Feb 16 14:11:22 crc kubenswrapper[4799]: I0216 14:11:22.721527 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/pull/0.log" Feb 16 14:11:22 crc kubenswrapper[4799]: I0216 14:11:22.854014 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/extract/0.log" Feb 16 14:11:22 crc kubenswrapper[4799]: I0216 14:11:22.860148 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/util/0.log" Feb 16 14:11:22 crc kubenswrapper[4799]: I0216 14:11:22.880449 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b7e09ac0fa12dcd58f182f6feea8f0bab244d10b25eada98c83c2d2e71qpwsx_c68693fd-4a9d-4ced-a924-278d18aca18f/pull/0.log" Feb 16 14:11:23 crc kubenswrapper[4799]: I0216 14:11:23.365315 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-ddwg6_5cc692f7-262b-4ffa-b259-69f665422e8d/manager/0.log" Feb 16 14:11:23 crc kubenswrapper[4799]: I0216 14:11:23.638666 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68c6d499cb-z9x44_c8106c68-2300-410d-94fc-5dc71651dba5/manager/0.log" Feb 16 14:11:23 crc kubenswrapper[4799]: I0216 14:11:23.856981 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-9595d6797-cq9hr_b286a989-7544-4596-bb1b-f06469aedbdc/manager/0.log" Feb 16 14:11:24 crc kubenswrapper[4799]: I0216 
14:11:24.082861 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54fb488b88-m6t96_3278a4bc-c2fa-4672-9a31-f53b0e95dbcd/manager/0.log" Feb 16 14:11:24 crc kubenswrapper[4799]: I0216 14:11:24.583325 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6494cdbf8f-lwlqz_f7f2d9a8-7d6a-479a-8141-f0b77a5f7abf/manager/0.log" Feb 16 14:11:24 crc kubenswrapper[4799]: I0216 14:11:24.993931 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-66d6b5f488-gt66t_ae60b108-5e33-408f-a861-8e2e1e9ab643/manager/0.log" Feb 16 14:11:25 crc kubenswrapper[4799]: I0216 14:11:25.156801 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c78d668d5-686fx_9ec15942-7ca3-444c-a096-a23c21b701ed/manager/0.log" Feb 16 14:11:25 crc kubenswrapper[4799]: I0216 14:11:25.342893 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-96fff9cb8-jb5fm_fb144fe6-dbb4-492a-acb1-b642ea0a20f0/manager/0.log" Feb 16 14:11:25 crc kubenswrapper[4799]: I0216 14:11:25.594266 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66997756f6-dqssm_1c684efb-e592-4c17-a896-897b466cd387/manager/0.log" Feb 16 14:11:25 crc kubenswrapper[4799]: I0216 14:11:25.616206 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-57746b5ff9-zh76r_b7dcb594-1126-4b75-8f5d-d2b5edc9ccad/manager/0.log" Feb 16 14:11:25 crc kubenswrapper[4799]: I0216 14:11:25.912089 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54967dbbdf-g4fg8_8cdd0bfb-b4c4-4c37-9d3b-37b4f1607379/manager/0.log" Feb 16 14:11:25 crc 
kubenswrapper[4799]: I0216 14:11:25.993701 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ddd85db87-8r6qg_17536931-400e-4131-8992-a30c2ebda385/manager/0.log" Feb 16 14:11:26 crc kubenswrapper[4799]: I0216 14:11:26.364479 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-wd9l5_3469cc9e-8b93-4c52-957a-78b91019767d/manager/0.log" Feb 16 14:11:26 crc kubenswrapper[4799]: I0216 14:11:26.797012 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7678556f8f-7z95t_e414b45d-e5dd-4905-9f69-781ec6e6d824/operator/0.log" Feb 16 14:11:27 crc kubenswrapper[4799]: I0216 14:11:27.034064 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pvc2p_29da4bf2-657a-4d9d-b61b-788ef89d4b19/registry-server/0.log" Feb 16 14:11:27 crc kubenswrapper[4799]: I0216 14:11:27.323616 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-85c99d655-5trbx_12dbbffb-b10a-4b02-9698-fa66c5ff9451/manager/0.log" Feb 16 14:11:27 crc kubenswrapper[4799]: I0216 14:11:27.584593 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57bd55f9b7-rv7cl_1328d15a-4b40-4db9-b0f8-0c8490e623b9/manager/0.log" Feb 16 14:11:27 crc kubenswrapper[4799]: I0216 14:11:27.807807 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hrpbx_692956be-1d06-489c-9a30-0f7e4e144caa/operator/0.log" Feb 16 14:11:28 crc kubenswrapper[4799]: I0216 14:11:28.109927 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-79558bbfbf-6fhfw_bd478887-eb50-4e9c-8933-7b513c323cac/manager/0.log" Feb 16 14:11:28 
crc kubenswrapper[4799]: I0216 14:11:28.685580 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-lz8sd_12e59839-c074-42ea-84e6-1be9b5a261ad/manager/0.log" Feb 16 14:11:28 crc kubenswrapper[4799]: I0216 14:11:28.847962 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-fhf99_7333b2fd-d81d-4daa-965a-3d5fefca8863/manager/0.log" Feb 16 14:11:29 crc kubenswrapper[4799]: I0216 14:11:29.028725 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-667bdd5bc9-lpnbm_1e501664-2258-45c7-8934-7f953c7fc799/manager/0.log" Feb 16 14:11:29 crc kubenswrapper[4799]: I0216 14:11:29.105111 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-745bbbd77b-4g8xm_ec674ea8-aa42-4917-906f-9a9b098ba2c0/manager/0.log" Feb 16 14:11:29 crc kubenswrapper[4799]: I0216 14:11:29.195000 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7f65d44ccf-htwqf_0935892b-89a7-4b63-8012-dbe285c5a2f3/manager/0.log" Feb 16 14:11:34 crc kubenswrapper[4799]: I0216 14:11:34.829204 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-c4b7d6946-lzptd_e555e0d9-b9d6-4e25-ad40-c6d9c1cae800/manager/0.log" Feb 16 14:11:50 crc kubenswrapper[4799]: I0216 14:11:50.206861 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-swx86_a6d10e0e-6088-4be2-90a6-5ea568d7ce25/control-plane-machine-set-operator/0.log" Feb 16 14:11:50 crc kubenswrapper[4799]: I0216 14:11:50.378296 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6lds8_12ef62d5-7675-44bf-a2e9-53093b004126/kube-rbac-proxy/0.log" Feb 16 14:11:50 crc kubenswrapper[4799]: I0216 14:11:50.394485 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6lds8_12ef62d5-7675-44bf-a2e9-53093b004126/machine-api-operator/0.log" Feb 16 14:11:51 crc kubenswrapper[4799]: I0216 14:11:51.793526 4799 patch_prober.go:28] interesting pod/machine-config-daemon-6dl99 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:11:51 crc kubenswrapper[4799]: I0216 14:11:51.794003 4799 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:11:51 crc kubenswrapper[4799]: I0216 14:11:51.794077 4799 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" Feb 16 14:11:51 crc kubenswrapper[4799]: I0216 14:11:51.795304 4799 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724"} pod="openshift-machine-config-operator/machine-config-daemon-6dl99" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:11:51 crc kubenswrapper[4799]: I0216 14:11:51.795425 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" 
podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerName="machine-config-daemon" containerID="cri-o://9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" gracePeriod=600 Feb 16 14:11:51 crc kubenswrapper[4799]: E0216 14:11:51.919225 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:11:52 crc kubenswrapper[4799]: I0216 14:11:52.457955 4799 generic.go:334] "Generic (PLEG): container finished" podID="e36db86c-3626-446f-8410-7e1f42ed16e1" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" exitCode=0 Feb 16 14:11:52 crc kubenswrapper[4799]: I0216 14:11:52.458008 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerDied","Data":"9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724"} Feb 16 14:11:52 crc kubenswrapper[4799]: I0216 14:11:52.458359 4799 scope.go:117] "RemoveContainer" containerID="f61a0149fc9439a26bb072a85fd3086e36ae51fb1d0c2377e8f6f1853e70763f" Feb 16 14:11:52 crc kubenswrapper[4799]: I0216 14:11:52.459700 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:11:52 crc kubenswrapper[4799]: E0216 14:11:52.460247 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:12:03 crc kubenswrapper[4799]: I0216 14:12:03.567780 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hcks5_4ce49784-a833-4d3a-8101-9618730dd5c7/cert-manager-controller/0.log" Feb 16 14:12:03 crc kubenswrapper[4799]: I0216 14:12:03.697833 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-kwbcb_d2d7275d-595b-44d8-afc7-8df5bb4b8e18/cert-manager-cainjector/0.log" Feb 16 14:12:03 crc kubenswrapper[4799]: I0216 14:12:03.772954 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-p9txt_75520423-f121-446d-8ad2-d0bfc440fd76/cert-manager-webhook/0.log" Feb 16 14:12:04 crc kubenswrapper[4799]: I0216 14:12:04.149569 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:12:04 crc kubenswrapper[4799]: E0216 14:12:04.149832 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:12:16 crc kubenswrapper[4799]: I0216 14:12:16.948746 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-x5r6j_dd3fb402-ea08-43d2-a79b-81e50caac303/nmstate-console-plugin/0.log" Feb 16 14:12:17 crc kubenswrapper[4799]: I0216 14:12:17.093800 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-8zffw_cc1669bc-8a99-4bd8-979a-59d07b2cc876/nmstate-handler/0.log" Feb 16 14:12:17 crc kubenswrapper[4799]: I0216 14:12:17.133392 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-prbbx_3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd/kube-rbac-proxy/0.log" Feb 16 14:12:17 crc kubenswrapper[4799]: I0216 14:12:17.149852 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:12:17 crc kubenswrapper[4799]: E0216 14:12:17.150239 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:12:17 crc kubenswrapper[4799]: I0216 14:12:17.182362 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-prbbx_3c3bd5d3-e22f-49b9-b75c-69bd1d6324cd/nmstate-metrics/0.log" Feb 16 14:12:17 crc kubenswrapper[4799]: I0216 14:12:17.351523 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-v55q4_ea8a1c06-85d6-40e1-933d-163d4247f147/nmstate-webhook/0.log" Feb 16 14:12:17 crc kubenswrapper[4799]: I0216 14:12:17.371116 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-9fd4k_a83cd9e0-dc18-4f68-ac2f-cfbdf85e0660/nmstate-operator/0.log" Feb 16 14:12:31 crc kubenswrapper[4799]: I0216 14:12:31.212518 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-l48qr_ac6a624e-f6f1-44b4-b236-99307dfc75b3/prometheus-operator/0.log" 
Feb 16 14:12:31 crc kubenswrapper[4799]: I0216 14:12:31.384932 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr_956b64fb-674a-40a6-be9b-b249d5b03aab/prometheus-operator-admission-webhook/0.log" Feb 16 14:12:31 crc kubenswrapper[4799]: I0216 14:12:31.441751 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8_25240a98-4447-4af0-89d7-8868fed65af8/prometheus-operator-admission-webhook/0.log" Feb 16 14:12:31 crc kubenswrapper[4799]: I0216 14:12:31.618764 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9kr64_1f31c8ae-d209-4bed-8ed7-f568f713bd15/operator/0.log" Feb 16 14:12:31 crc kubenswrapper[4799]: I0216 14:12:31.637017 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fp4wv_ae279f38-d065-46a1-adb4-671588c18906/perses-operator/0.log" Feb 16 14:12:32 crc kubenswrapper[4799]: I0216 14:12:32.149358 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:12:32 crc kubenswrapper[4799]: E0216 14:12:32.149761 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:12:43 crc kubenswrapper[4799]: I0216 14:12:43.149447 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:12:43 crc kubenswrapper[4799]: E0216 14:12:43.150314 4799 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:12:45 crc kubenswrapper[4799]: I0216 14:12:45.865782 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-4djwq_c54deb12-6083-4890-ab2d-20c5cede1547/kube-rbac-proxy/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.039284 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-4djwq_c54deb12-6083-4890-ab2d-20c5cede1547/controller/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.093685 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-frr-files/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.243952 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-frr-files/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.274479 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-reloader/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.279908 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-metrics/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.346055 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-reloader/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.546854 4799 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-frr-files/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.551168 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-metrics/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.570916 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-metrics/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.579617 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-reloader/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.738477 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-metrics/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.756693 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-reloader/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.756744 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/cp-frr-files/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.805363 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/controller/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.950588 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/frr-metrics/0.log" Feb 16 14:12:46 crc kubenswrapper[4799]: I0216 14:12:46.957143 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/kube-rbac-proxy/0.log" Feb 16 14:12:47 crc kubenswrapper[4799]: I0216 14:12:47.037724 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/kube-rbac-proxy-frr/0.log" Feb 16 14:12:47 crc kubenswrapper[4799]: I0216 14:12:47.166742 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/reloader/0.log" Feb 16 14:12:47 crc kubenswrapper[4799]: I0216 14:12:47.290374 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-qrqgr_4c963766-8661-4a44-8416-f0202f10fafb/frr-k8s-webhook-server/0.log" Feb 16 14:12:47 crc kubenswrapper[4799]: I0216 14:12:47.395075 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c7df86bbf-sjqnz_4af8dbaa-4279-4669-ac62-b78ae77d4063/manager/0.log" Feb 16 14:12:47 crc kubenswrapper[4799]: I0216 14:12:47.549576 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-67d76b6b75-prfvg_11d39ab5-f7dc-4a0f-8746-5ec23ce4c7d3/webhook-server/0.log" Feb 16 14:12:47 crc kubenswrapper[4799]: I0216 14:12:47.671362 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jcvfs_00530bae-1878-49a9-876f-97b521db61cd/kube-rbac-proxy/0.log" Feb 16 14:12:48 crc kubenswrapper[4799]: I0216 14:12:48.275794 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jcvfs_00530bae-1878-49a9-876f-97b521db61cd/speaker/0.log" Feb 16 14:12:48 crc kubenswrapper[4799]: I0216 14:12:48.829861 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fmgnv_e20c8664-edbd-4e42-96e9-da19e197b232/frr/0.log" Feb 16 14:12:56 crc kubenswrapper[4799]: I0216 14:12:56.149164 4799 scope.go:117] 
"RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:12:56 crc kubenswrapper[4799]: E0216 14:12:56.149932 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:13:02 crc kubenswrapper[4799]: I0216 14:13:02.236553 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/util/0.log" Feb 16 14:13:02 crc kubenswrapper[4799]: I0216 14:13:02.432094 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/pull/0.log" Feb 16 14:13:02 crc kubenswrapper[4799]: I0216 14:13:02.696371 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/util/0.log" Feb 16 14:13:02 crc kubenswrapper[4799]: I0216 14:13:02.715932 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/pull/0.log" Feb 16 14:13:02 crc kubenswrapper[4799]: I0216 14:13:02.852149 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/util/0.log" Feb 16 14:13:02 crc kubenswrapper[4799]: I0216 14:13:02.862927 4799 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/pull/0.log" Feb 16 14:13:02 crc kubenswrapper[4799]: I0216 14:13:02.919923 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mlz7s_b5433426-dfe0-4aa5-b5d6-f3bdadaf80aa/extract/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.040801 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/util/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.228488 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/util/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.237016 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/pull/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.252457 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/pull/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.427065 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/pull/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.453602 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/extract/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.461627 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213qkrqt_ffddb3c3-fb7b-447a-8b54-ae12f9488514/util/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.618578 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-utilities/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.774198 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-utilities/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.774242 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-content/0.log" Feb 16 14:13:03 crc kubenswrapper[4799]: I0216 14:13:03.826826 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-content/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.036391 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-utilities/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.051859 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/extract-content/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.268930 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-utilities/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.471904 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qsv5h_7cf8cac2-5686-40a2-91ee-86b8dc75db37/registry-server/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.479293 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-utilities/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.500683 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-content/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.554167 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-content/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.669458 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-utilities/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.726669 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/extract-content/0.log" Feb 16 14:13:04 crc kubenswrapper[4799]: I0216 14:13:04.912748 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/util/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.086749 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/pull/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.132878 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/util/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.140629 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82zmn_32789136-f921-4aee-9f3b-4f61c64cd97f/registry-server/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.174936 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/pull/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.309581 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/pull/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.333074 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/util/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.339544 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaw5blg_3b4d0f13-5b46-4300-bed6-54cf596bf6bd/extract/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.514070 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qb8p5_a8b56ef0-6df7-4a6a-a550-b0699ebaf909/marketplace-operator/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: 
I0216 14:13:05.542349 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-utilities/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.714575 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-utilities/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.718088 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-content/0.log" Feb 16 14:13:05 crc kubenswrapper[4799]: I0216 14:13:05.718896 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-content/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.131987 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-utilities/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.177948 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/extract-content/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.324077 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9t876_347ac568-46b1-4360-90fb-22d726ea9ab5/registry-server/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.364015 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-utilities/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.561051 4799 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-utilities/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.582462 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-content/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.614366 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-content/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.768655 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-utilities/0.log" Feb 16 14:13:06 crc kubenswrapper[4799]: I0216 14:13:06.780792 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/extract-content/0.log" Feb 16 14:13:07 crc kubenswrapper[4799]: I0216 14:13:07.149888 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:13:07 crc kubenswrapper[4799]: E0216 14:13:07.150185 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:13:07 crc kubenswrapper[4799]: I0216 14:13:07.497102 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bl8v2_06fa4a8e-8c8a-4317-a695-7430ccad4dea/registry-server/0.log" Feb 16 14:13:21 crc 
kubenswrapper[4799]: I0216 14:13:21.144040 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-l48qr_ac6a624e-f6f1-44b4-b236-99307dfc75b3/prometheus-operator/0.log" Feb 16 14:13:21 crc kubenswrapper[4799]: I0216 14:13:21.200501 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dc588dd6b-m54vr_956b64fb-674a-40a6-be9b-b249d5b03aab/prometheus-operator-admission-webhook/0.log" Feb 16 14:13:21 crc kubenswrapper[4799]: I0216 14:13:21.204509 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7dc588dd6b-wtcp8_25240a98-4447-4af0-89d7-8868fed65af8/prometheus-operator-admission-webhook/0.log" Feb 16 14:13:21 crc kubenswrapper[4799]: I0216 14:13:21.416795 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fp4wv_ae279f38-d065-46a1-adb4-671588c18906/perses-operator/0.log" Feb 16 14:13:21 crc kubenswrapper[4799]: I0216 14:13:21.420036 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9kr64_1f31c8ae-d209-4bed-8ed7-f568f713bd15/operator/0.log" Feb 16 14:13:22 crc kubenswrapper[4799]: I0216 14:13:22.149996 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:13:22 crc kubenswrapper[4799]: E0216 14:13:22.150524 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:13:29 crc kubenswrapper[4799]: 
E0216 14:13:29.932338 4799 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:52036->38.102.83.154:41287: write tcp 38.102.83.154:52036->38.102.83.154:41287: write: broken pipe Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.768034 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f8fr4"] Feb 16 14:13:32 crc kubenswrapper[4799]: E0216 14:13:32.769199 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717af918-3366-4638-b2ad-5735abfec78c" containerName="container-00" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.769218 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="717af918-3366-4638-b2ad-5735abfec78c" containerName="container-00" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.769469 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="717af918-3366-4638-b2ad-5735abfec78c" containerName="container-00" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.771348 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.778758 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8fr4"] Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.841957 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-utilities\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.842387 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-catalog-content\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.842508 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjpkz\" (UniqueName: \"kubernetes.io/projected/92f77d94-f83c-40be-8eaa-664bcd560f8e-kube-api-access-bjpkz\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.944392 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-utilities\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.944562 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-catalog-content\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.944599 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjpkz\" (UniqueName: \"kubernetes.io/projected/92f77d94-f83c-40be-8eaa-664bcd560f8e-kube-api-access-bjpkz\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.945014 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-catalog-content\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.945016 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-utilities\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:32 crc kubenswrapper[4799]: I0216 14:13:32.969051 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjpkz\" (UniqueName: \"kubernetes.io/projected/92f77d94-f83c-40be-8eaa-664bcd560f8e-kube-api-access-bjpkz\") pod \"community-operators-f8fr4\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:33 crc kubenswrapper[4799]: I0216 14:13:33.087799 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:33 crc kubenswrapper[4799]: I0216 14:13:33.149186 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:13:33 crc kubenswrapper[4799]: E0216 14:13:33.149689 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:13:33 crc kubenswrapper[4799]: I0216 14:13:33.635803 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8fr4"] Feb 16 14:13:34 crc kubenswrapper[4799]: I0216 14:13:34.520876 4799 generic.go:334] "Generic (PLEG): container finished" podID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerID="fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9" exitCode=0 Feb 16 14:13:34 crc kubenswrapper[4799]: I0216 14:13:34.520944 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8fr4" event={"ID":"92f77d94-f83c-40be-8eaa-664bcd560f8e","Type":"ContainerDied","Data":"fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9"} Feb 16 14:13:34 crc kubenswrapper[4799]: I0216 14:13:34.521233 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8fr4" event={"ID":"92f77d94-f83c-40be-8eaa-664bcd560f8e","Type":"ContainerStarted","Data":"18c355834112ef5e76de92ad6ad0e7b1ce55b2da9afee9a47fff7cac4358fe2f"} Feb 16 14:13:34 crc kubenswrapper[4799]: I0216 14:13:34.522986 4799 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 
14:13:35 crc kubenswrapper[4799]: I0216 14:13:35.533074 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8fr4" event={"ID":"92f77d94-f83c-40be-8eaa-664bcd560f8e","Type":"ContainerStarted","Data":"43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6"} Feb 16 14:13:37 crc kubenswrapper[4799]: I0216 14:13:37.552999 4799 generic.go:334] "Generic (PLEG): container finished" podID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerID="43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6" exitCode=0 Feb 16 14:13:37 crc kubenswrapper[4799]: I0216 14:13:37.553087 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8fr4" event={"ID":"92f77d94-f83c-40be-8eaa-664bcd560f8e","Type":"ContainerDied","Data":"43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6"} Feb 16 14:13:38 crc kubenswrapper[4799]: I0216 14:13:38.564471 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8fr4" event={"ID":"92f77d94-f83c-40be-8eaa-664bcd560f8e","Type":"ContainerStarted","Data":"aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c"} Feb 16 14:13:38 crc kubenswrapper[4799]: I0216 14:13:38.590835 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f8fr4" podStartSLOduration=3.136338071 podStartE2EDuration="6.590816542s" podCreationTimestamp="2026-02-16 14:13:32 +0000 UTC" firstStartedPulling="2026-02-16 14:13:34.52279453 +0000 UTC m=+6120.115809864" lastFinishedPulling="2026-02-16 14:13:37.977272991 +0000 UTC m=+6123.570288335" observedRunningTime="2026-02-16 14:13:38.581146356 +0000 UTC m=+6124.174161690" watchObservedRunningTime="2026-02-16 14:13:38.590816542 +0000 UTC m=+6124.183831876" Feb 16 14:13:42 crc kubenswrapper[4799]: E0216 14:13:42.836292 4799 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.154:54024->38.102.83.154:41287: write tcp 38.102.83.154:54024->38.102.83.154:41287: write: broken pipe Feb 16 14:13:43 crc kubenswrapper[4799]: I0216 14:13:43.087918 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:43 crc kubenswrapper[4799]: I0216 14:13:43.087975 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:43 crc kubenswrapper[4799]: I0216 14:13:43.148411 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:43 crc kubenswrapper[4799]: I0216 14:13:43.656350 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:43 crc kubenswrapper[4799]: I0216 14:13:43.709208 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8fr4"] Feb 16 14:13:44 crc kubenswrapper[4799]: I0216 14:13:44.149910 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:13:44 crc kubenswrapper[4799]: E0216 14:13:44.150197 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:13:45 crc kubenswrapper[4799]: I0216 14:13:45.628411 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f8fr4" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerName="registry-server" 
containerID="cri-o://aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c" gracePeriod=2 Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.201150 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.356493 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjpkz\" (UniqueName: \"kubernetes.io/projected/92f77d94-f83c-40be-8eaa-664bcd560f8e-kube-api-access-bjpkz\") pod \"92f77d94-f83c-40be-8eaa-664bcd560f8e\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.356670 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-catalog-content\") pod \"92f77d94-f83c-40be-8eaa-664bcd560f8e\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.356739 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-utilities\") pod \"92f77d94-f83c-40be-8eaa-664bcd560f8e\" (UID: \"92f77d94-f83c-40be-8eaa-664bcd560f8e\") " Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.361908 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-utilities" (OuterVolumeSpecName: "utilities") pod "92f77d94-f83c-40be-8eaa-664bcd560f8e" (UID: "92f77d94-f83c-40be-8eaa-664bcd560f8e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.364101 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f77d94-f83c-40be-8eaa-664bcd560f8e-kube-api-access-bjpkz" (OuterVolumeSpecName: "kube-api-access-bjpkz") pod "92f77d94-f83c-40be-8eaa-664bcd560f8e" (UID: "92f77d94-f83c-40be-8eaa-664bcd560f8e"). InnerVolumeSpecName "kube-api-access-bjpkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.437689 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92f77d94-f83c-40be-8eaa-664bcd560f8e" (UID: "92f77d94-f83c-40be-8eaa-664bcd560f8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.465105 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjpkz\" (UniqueName: \"kubernetes.io/projected/92f77d94-f83c-40be-8eaa-664bcd560f8e-kube-api-access-bjpkz\") on node \"crc\" DevicePath \"\"" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.465171 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.465182 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f77d94-f83c-40be-8eaa-664bcd560f8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.640230 4799 generic.go:334] "Generic (PLEG): container finished" podID="92f77d94-f83c-40be-8eaa-664bcd560f8e" 
containerID="aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c" exitCode=0 Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.640276 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8fr4" event={"ID":"92f77d94-f83c-40be-8eaa-664bcd560f8e","Type":"ContainerDied","Data":"aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c"} Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.640305 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8fr4" event={"ID":"92f77d94-f83c-40be-8eaa-664bcd560f8e","Type":"ContainerDied","Data":"18c355834112ef5e76de92ad6ad0e7b1ce55b2da9afee9a47fff7cac4358fe2f"} Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.640325 4799 scope.go:117] "RemoveContainer" containerID="aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.640463 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f8fr4" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.669631 4799 scope.go:117] "RemoveContainer" containerID="43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.693573 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8fr4"] Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.714808 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f8fr4"] Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.739444 4799 scope.go:117] "RemoveContainer" containerID="fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.769909 4799 scope.go:117] "RemoveContainer" containerID="aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c" Feb 16 14:13:46 crc kubenswrapper[4799]: E0216 14:13:46.770489 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c\": container with ID starting with aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c not found: ID does not exist" containerID="aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.770708 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c"} err="failed to get container status \"aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c\": rpc error: code = NotFound desc = could not find container \"aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c\": container with ID starting with aedbbfd4cda9a7f4f1b740b4084e398ff2bd340cf1a5839b09dd89aedca4e38c not 
found: ID does not exist" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.770735 4799 scope.go:117] "RemoveContainer" containerID="43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6" Feb 16 14:13:46 crc kubenswrapper[4799]: E0216 14:13:46.771250 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6\": container with ID starting with 43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6 not found: ID does not exist" containerID="43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.771290 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6"} err="failed to get container status \"43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6\": rpc error: code = NotFound desc = could not find container \"43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6\": container with ID starting with 43c043dc6292fcd48b664ba999798ce85fac92efe63168d9ef7328edf74e2cb6 not found: ID does not exist" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.771311 4799 scope.go:117] "RemoveContainer" containerID="fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9" Feb 16 14:13:46 crc kubenswrapper[4799]: E0216 14:13:46.771742 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9\": container with ID starting with fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9 not found: ID does not exist" containerID="fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9" Feb 16 14:13:46 crc kubenswrapper[4799]: I0216 14:13:46.771764 4799 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9"} err="failed to get container status \"fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9\": rpc error: code = NotFound desc = could not find container \"fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9\": container with ID starting with fec7301fb785829b2dffcca3784db2ddce826a90dd4ddaacee2b23b7153738d9 not found: ID does not exist" Feb 16 14:13:47 crc kubenswrapper[4799]: I0216 14:13:47.164394 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" path="/var/lib/kubelet/pods/92f77d94-f83c-40be-8eaa-664bcd560f8e/volumes" Feb 16 14:13:58 crc kubenswrapper[4799]: I0216 14:13:58.150477 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:13:58 crc kubenswrapper[4799]: E0216 14:13:58.151684 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:14:13 crc kubenswrapper[4799]: I0216 14:14:13.149614 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:14:13 crc kubenswrapper[4799]: E0216 14:14:13.150962 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:14:28 crc kubenswrapper[4799]: I0216 14:14:28.149285 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:14:28 crc kubenswrapper[4799]: E0216 14:14:28.150461 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:14:39 crc kubenswrapper[4799]: I0216 14:14:39.150446 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:14:39 crc kubenswrapper[4799]: E0216 14:14:39.151413 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:14:50 crc kubenswrapper[4799]: I0216 14:14:50.150266 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:14:50 crc kubenswrapper[4799]: E0216 14:14:50.151217 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.174117 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz"] Feb 16 14:15:00 crc kubenswrapper[4799]: E0216 14:15:00.175440 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerName="extract-utilities" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.175466 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerName="extract-utilities" Feb 16 14:15:00 crc kubenswrapper[4799]: E0216 14:15:00.175488 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerName="registry-server" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.175496 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerName="registry-server" Feb 16 14:15:00 crc kubenswrapper[4799]: E0216 14:15:00.175532 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerName="extract-content" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.175540 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerName="extract-content" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.175769 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f77d94-f83c-40be-8eaa-664bcd560f8e" containerName="registry-server" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.176910 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.179971 4799 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.180402 4799 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.191071 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz"] Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.320471 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-config-volume\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.320856 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6d7\" (UniqueName: \"kubernetes.io/projected/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-kube-api-access-4w6d7\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.321020 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-secret-volume\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.422773 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-secret-volume\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.423158 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-config-volume\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.423323 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6d7\" (UniqueName: \"kubernetes.io/projected/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-kube-api-access-4w6d7\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.424207 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-config-volume\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.440535 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-secret-volume\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.443800 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6d7\" (UniqueName: \"kubernetes.io/projected/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-kube-api-access-4w6d7\") pod \"collect-profiles-29520855-7d6lz\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:00 crc kubenswrapper[4799]: I0216 14:15:00.535239 4799 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:01 crc kubenswrapper[4799]: I0216 14:15:01.002683 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz"] Feb 16 14:15:01 crc kubenswrapper[4799]: I0216 14:15:01.565750 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" event={"ID":"9d2afa06-9f0a-4cf4-9046-b4289ee708d8","Type":"ContainerStarted","Data":"43de1e6da7f312e1a76a5c66ad00d5bdf0b7bcd11c4b35c718fd7f227dc2b0f0"} Feb 16 14:15:01 crc kubenswrapper[4799]: I0216 14:15:01.565922 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" event={"ID":"9d2afa06-9f0a-4cf4-9046-b4289ee708d8","Type":"ContainerStarted","Data":"b5391e94ed0d0c33339ad403a8b1479e13ffc77c6eb8683f987355242172acb7"} Feb 16 14:15:01 crc kubenswrapper[4799]: I0216 14:15:01.582344 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" 
podStartSLOduration=1.582327479 podStartE2EDuration="1.582327479s" podCreationTimestamp="2026-02-16 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:15:01.578288263 +0000 UTC m=+6207.171303587" watchObservedRunningTime="2026-02-16 14:15:01.582327479 +0000 UTC m=+6207.175342813" Feb 16 14:15:02 crc kubenswrapper[4799]: I0216 14:15:02.581845 4799 generic.go:334] "Generic (PLEG): container finished" podID="9d2afa06-9f0a-4cf4-9046-b4289ee708d8" containerID="43de1e6da7f312e1a76a5c66ad00d5bdf0b7bcd11c4b35c718fd7f227dc2b0f0" exitCode=0 Feb 16 14:15:02 crc kubenswrapper[4799]: I0216 14:15:02.581917 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" event={"ID":"9d2afa06-9f0a-4cf4-9046-b4289ee708d8","Type":"ContainerDied","Data":"43de1e6da7f312e1a76a5c66ad00d5bdf0b7bcd11c4b35c718fd7f227dc2b0f0"} Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.109873 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.149067 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:15:04 crc kubenswrapper[4799]: E0216 14:15:04.149451 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.207722 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-secret-volume\") pod \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.208175 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w6d7\" (UniqueName: \"kubernetes.io/projected/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-kube-api-access-4w6d7\") pod \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.208261 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-config-volume\") pod \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\" (UID: \"9d2afa06-9f0a-4cf4-9046-b4289ee708d8\") " Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.209498 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "9d2afa06-9f0a-4cf4-9046-b4289ee708d8" (UID: "9d2afa06-9f0a-4cf4-9046-b4289ee708d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.210133 4799 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.216379 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9d2afa06-9f0a-4cf4-9046-b4289ee708d8" (UID: "9d2afa06-9f0a-4cf4-9046-b4289ee708d8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.219931 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-kube-api-access-4w6d7" (OuterVolumeSpecName: "kube-api-access-4w6d7") pod "9d2afa06-9f0a-4cf4-9046-b4289ee708d8" (UID: "9d2afa06-9f0a-4cf4-9046-b4289ee708d8"). InnerVolumeSpecName "kube-api-access-4w6d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.311922 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w6d7\" (UniqueName: \"kubernetes.io/projected/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-kube-api-access-4w6d7\") on node \"crc\" DevicePath \"\"" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.311976 4799 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d2afa06-9f0a-4cf4-9046-b4289ee708d8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.606544 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" event={"ID":"9d2afa06-9f0a-4cf4-9046-b4289ee708d8","Type":"ContainerDied","Data":"b5391e94ed0d0c33339ad403a8b1479e13ffc77c6eb8683f987355242172acb7"} Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.606587 4799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5391e94ed0d0c33339ad403a8b1479e13ffc77c6eb8683f987355242172acb7" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.606639 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-7d6lz" Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.683377 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l"] Feb 16 14:15:04 crc kubenswrapper[4799]: I0216 14:15:04.695464 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-fwp8l"] Feb 16 14:15:05 crc kubenswrapper[4799]: I0216 14:15:05.186033 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36" path="/var/lib/kubelet/pods/3cb31fe2-3cdc-47ad-a432-0eb3b9ac1d36/volumes" Feb 16 14:15:15 crc kubenswrapper[4799]: I0216 14:15:15.164218 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:15:15 crc kubenswrapper[4799]: E0216 14:15:15.165179 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:15:25 crc kubenswrapper[4799]: I0216 14:15:25.851314 4799 generic.go:334] "Generic (PLEG): container finished" podID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerID="4f078e4cf11415feab6e5a4cb386eda1102186c5f564e063d2b6844a567b6220" exitCode=0 Feb 16 14:15:25 crc kubenswrapper[4799]: I0216 14:15:25.851440 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lppcm/must-gather-m8xqh" event={"ID":"17b52a93-fe09-496f-b253-1e84f1cbf8af","Type":"ContainerDied","Data":"4f078e4cf11415feab6e5a4cb386eda1102186c5f564e063d2b6844a567b6220"} Feb 16 
14:15:25 crc kubenswrapper[4799]: I0216 14:15:25.853032 4799 scope.go:117] "RemoveContainer" containerID="4f078e4cf11415feab6e5a4cb386eda1102186c5f564e063d2b6844a567b6220" Feb 16 14:15:26 crc kubenswrapper[4799]: I0216 14:15:26.594109 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lppcm_must-gather-m8xqh_17b52a93-fe09-496f-b253-1e84f1cbf8af/gather/0.log" Feb 16 14:15:30 crc kubenswrapper[4799]: I0216 14:15:30.149999 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:15:30 crc kubenswrapper[4799]: E0216 14:15:30.151055 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:15:38 crc kubenswrapper[4799]: I0216 14:15:38.770933 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lppcm/must-gather-m8xqh"] Feb 16 14:15:38 crc kubenswrapper[4799]: I0216 14:15:38.771851 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lppcm/must-gather-m8xqh" podUID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerName="copy" containerID="cri-o://4bd5f9234ba24cbd4f3c3cda9364b355a93c267f66ff15da2bb29c3e088db509" gracePeriod=2 Feb 16 14:15:38 crc kubenswrapper[4799]: I0216 14:15:38.781861 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lppcm/must-gather-m8xqh"] Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.014117 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lppcm_must-gather-m8xqh_17b52a93-fe09-496f-b253-1e84f1cbf8af/copy/0.log" Feb 16 
14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.014865 4799 generic.go:334] "Generic (PLEG): container finished" podID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerID="4bd5f9234ba24cbd4f3c3cda9364b355a93c267f66ff15da2bb29c3e088db509" exitCode=143 Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.209687 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lppcm_must-gather-m8xqh_17b52a93-fe09-496f-b253-1e84f1cbf8af/copy/0.log" Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.210272 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.259638 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9hg\" (UniqueName: \"kubernetes.io/projected/17b52a93-fe09-496f-b253-1e84f1cbf8af-kube-api-access-5d9hg\") pod \"17b52a93-fe09-496f-b253-1e84f1cbf8af\" (UID: \"17b52a93-fe09-496f-b253-1e84f1cbf8af\") " Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.259991 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17b52a93-fe09-496f-b253-1e84f1cbf8af-must-gather-output\") pod \"17b52a93-fe09-496f-b253-1e84f1cbf8af\" (UID: \"17b52a93-fe09-496f-b253-1e84f1cbf8af\") " Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.270467 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b52a93-fe09-496f-b253-1e84f1cbf8af-kube-api-access-5d9hg" (OuterVolumeSpecName: "kube-api-access-5d9hg") pod "17b52a93-fe09-496f-b253-1e84f1cbf8af" (UID: "17b52a93-fe09-496f-b253-1e84f1cbf8af"). InnerVolumeSpecName "kube-api-access-5d9hg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.365487 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d9hg\" (UniqueName: \"kubernetes.io/projected/17b52a93-fe09-496f-b253-1e84f1cbf8af-kube-api-access-5d9hg\") on node \"crc\" DevicePath \"\"" Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.465667 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b52a93-fe09-496f-b253-1e84f1cbf8af-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "17b52a93-fe09-496f-b253-1e84f1cbf8af" (UID: "17b52a93-fe09-496f-b253-1e84f1cbf8af"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:15:39 crc kubenswrapper[4799]: I0216 14:15:39.467002 4799 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/17b52a93-fe09-496f-b253-1e84f1cbf8af-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 14:15:40 crc kubenswrapper[4799]: I0216 14:15:40.025336 4799 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lppcm_must-gather-m8xqh_17b52a93-fe09-496f-b253-1e84f1cbf8af/copy/0.log" Feb 16 14:15:40 crc kubenswrapper[4799]: I0216 14:15:40.025752 4799 scope.go:117] "RemoveContainer" containerID="4bd5f9234ba24cbd4f3c3cda9364b355a93c267f66ff15da2bb29c3e088db509" Feb 16 14:15:40 crc kubenswrapper[4799]: I0216 14:15:40.025811 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lppcm/must-gather-m8xqh" Feb 16 14:15:40 crc kubenswrapper[4799]: I0216 14:15:40.046277 4799 scope.go:117] "RemoveContainer" containerID="4f078e4cf11415feab6e5a4cb386eda1102186c5f564e063d2b6844a567b6220" Feb 16 14:15:41 crc kubenswrapper[4799]: I0216 14:15:41.164427 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b52a93-fe09-496f-b253-1e84f1cbf8af" path="/var/lib/kubelet/pods/17b52a93-fe09-496f-b253-1e84f1cbf8af/volumes" Feb 16 14:15:42 crc kubenswrapper[4799]: I0216 14:15:42.149324 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:15:42 crc kubenswrapper[4799]: E0216 14:15:42.149993 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:15:45 crc kubenswrapper[4799]: I0216 14:15:45.871348 4799 scope.go:117] "RemoveContainer" containerID="b6518a84bb93e0ed7dfb89d21a1fb86ee9fdea536a38d31e703faf8b32fe8186" Feb 16 14:15:53 crc kubenswrapper[4799]: I0216 14:15:53.149835 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:15:53 crc kubenswrapper[4799]: E0216 14:15:53.151664 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" 
podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:16:05 crc kubenswrapper[4799]: I0216 14:16:05.159258 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:16:05 crc kubenswrapper[4799]: E0216 14:16:05.160636 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:16:20 crc kubenswrapper[4799]: I0216 14:16:20.149699 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:16:20 crc kubenswrapper[4799]: E0216 14:16:20.150755 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:16:34 crc kubenswrapper[4799]: I0216 14:16:34.150046 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:16:34 crc kubenswrapper[4799]: E0216 14:16:34.151468 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:16:48 crc kubenswrapper[4799]: I0216 14:16:48.150503 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:16:48 crc kubenswrapper[4799]: E0216 14:16:48.151939 4799 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-6dl99_openshift-machine-config-operator(e36db86c-3626-446f-8410-7e1f42ed16e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" podUID="e36db86c-3626-446f-8410-7e1f42ed16e1" Feb 16 14:17:00 crc kubenswrapper[4799]: I0216 14:17:00.149521 4799 scope.go:117] "RemoveContainer" containerID="9ad2e04f7078e0b4ce2353dc7c667b945dc6a47c8144c73e8b7e131f67294724" Feb 16 14:17:00 crc kubenswrapper[4799]: I0216 14:17:00.993977 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6dl99" event={"ID":"e36db86c-3626-446f-8410-7e1f42ed16e1","Type":"ContainerStarted","Data":"241102e8961240986dff65c5c967dd9dd9aad49be1ae969ad4e1716167dcce1c"} Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.136054 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rgwrw"] Feb 16 14:17:46 crc kubenswrapper[4799]: E0216 14:17:46.138542 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2afa06-9f0a-4cf4-9046-b4289ee708d8" containerName="collect-profiles" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.138648 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2afa06-9f0a-4cf4-9046-b4289ee708d8" containerName="collect-profiles" Feb 16 14:17:46 crc kubenswrapper[4799]: E0216 14:17:46.138745 4799 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerName="gather" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.138812 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerName="gather" Feb 16 14:17:46 crc kubenswrapper[4799]: E0216 14:17:46.138896 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerName="copy" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.138958 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerName="copy" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.139435 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerName="gather" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.139536 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2afa06-9f0a-4cf4-9046-b4289ee708d8" containerName="collect-profiles" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.139621 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b52a93-fe09-496f-b253-1e84f1cbf8af" containerName="copy" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.141896 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.155963 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgwrw"] Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.237758 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhg7n\" (UniqueName: \"kubernetes.io/projected/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-kube-api-access-mhg7n\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.238185 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-catalog-content\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.238395 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-utilities\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.341077 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhg7n\" (UniqueName: \"kubernetes.io/projected/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-kube-api-access-mhg7n\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.341158 4799 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-catalog-content\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.341267 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-utilities\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.341647 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-catalog-content\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.341716 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-utilities\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.363937 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhg7n\" (UniqueName: \"kubernetes.io/projected/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-kube-api-access-mhg7n\") pod \"certified-operators-rgwrw\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:46 crc kubenswrapper[4799]: I0216 14:17:46.498271 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:47 crc kubenswrapper[4799]: I0216 14:17:47.038649 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgwrw"] Feb 16 14:17:47 crc kubenswrapper[4799]: I0216 14:17:47.527850 4799 generic.go:334] "Generic (PLEG): container finished" podID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerID="c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23" exitCode=0 Feb 16 14:17:47 crc kubenswrapper[4799]: I0216 14:17:47.527919 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgwrw" event={"ID":"112c7aeb-ce95-4098-8a6f-fb2eca87abf2","Type":"ContainerDied","Data":"c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23"} Feb 16 14:17:47 crc kubenswrapper[4799]: I0216 14:17:47.528958 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgwrw" event={"ID":"112c7aeb-ce95-4098-8a6f-fb2eca87abf2","Type":"ContainerStarted","Data":"6442c08ce67550bbcd475d2262a188fa0c4a90ec0420d50ae1fbd9d033ef8f5d"} Feb 16 14:17:49 crc kubenswrapper[4799]: I0216 14:17:49.558286 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgwrw" event={"ID":"112c7aeb-ce95-4098-8a6f-fb2eca87abf2","Type":"ContainerStarted","Data":"fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816"} Feb 16 14:17:51 crc kubenswrapper[4799]: I0216 14:17:51.582730 4799 generic.go:334] "Generic (PLEG): container finished" podID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerID="fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816" exitCode=0 Feb 16 14:17:51 crc kubenswrapper[4799]: I0216 14:17:51.582833 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgwrw" 
event={"ID":"112c7aeb-ce95-4098-8a6f-fb2eca87abf2","Type":"ContainerDied","Data":"fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816"} Feb 16 14:17:52 crc kubenswrapper[4799]: I0216 14:17:52.595106 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgwrw" event={"ID":"112c7aeb-ce95-4098-8a6f-fb2eca87abf2","Type":"ContainerStarted","Data":"88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d"} Feb 16 14:17:52 crc kubenswrapper[4799]: I0216 14:17:52.617261 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rgwrw" podStartSLOduration=2.012520132 podStartE2EDuration="6.61724279s" podCreationTimestamp="2026-02-16 14:17:46 +0000 UTC" firstStartedPulling="2026-02-16 14:17:47.530034556 +0000 UTC m=+6373.123049890" lastFinishedPulling="2026-02-16 14:17:52.134757184 +0000 UTC m=+6377.727772548" observedRunningTime="2026-02-16 14:17:52.612663019 +0000 UTC m=+6378.205678363" watchObservedRunningTime="2026-02-16 14:17:52.61724279 +0000 UTC m=+6378.210258134" Feb 16 14:17:56 crc kubenswrapper[4799]: I0216 14:17:56.498821 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:56 crc kubenswrapper[4799]: I0216 14:17:56.499636 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:17:56 crc kubenswrapper[4799]: I0216 14:17:56.563149 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:18:06 crc kubenswrapper[4799]: I0216 14:18:06.563218 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:18:06 crc kubenswrapper[4799]: I0216 14:18:06.656497 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-rgwrw"] Feb 16 14:18:06 crc kubenswrapper[4799]: I0216 14:18:06.769887 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rgwrw" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerName="registry-server" containerID="cri-o://88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d" gracePeriod=2 Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.289771 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.370974 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhg7n\" (UniqueName: \"kubernetes.io/projected/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-kube-api-access-mhg7n\") pod \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.371035 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-utilities\") pod \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.371105 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-catalog-content\") pod \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\" (UID: \"112c7aeb-ce95-4098-8a6f-fb2eca87abf2\") " Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.372299 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-utilities" (OuterVolumeSpecName: "utilities") pod "112c7aeb-ce95-4098-8a6f-fb2eca87abf2" (UID: 
"112c7aeb-ce95-4098-8a6f-fb2eca87abf2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.382374 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-kube-api-access-mhg7n" (OuterVolumeSpecName: "kube-api-access-mhg7n") pod "112c7aeb-ce95-4098-8a6f-fb2eca87abf2" (UID: "112c7aeb-ce95-4098-8a6f-fb2eca87abf2"). InnerVolumeSpecName "kube-api-access-mhg7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.479383 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhg7n\" (UniqueName: \"kubernetes.io/projected/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-kube-api-access-mhg7n\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.479436 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.480835 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "112c7aeb-ce95-4098-8a6f-fb2eca87abf2" (UID: "112c7aeb-ce95-4098-8a6f-fb2eca87abf2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.581283 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112c7aeb-ce95-4098-8a6f-fb2eca87abf2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.780944 4799 generic.go:334] "Generic (PLEG): container finished" podID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerID="88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d" exitCode=0 Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.781015 4799 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgwrw" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.781029 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgwrw" event={"ID":"112c7aeb-ce95-4098-8a6f-fb2eca87abf2","Type":"ContainerDied","Data":"88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d"} Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.781512 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgwrw" event={"ID":"112c7aeb-ce95-4098-8a6f-fb2eca87abf2","Type":"ContainerDied","Data":"6442c08ce67550bbcd475d2262a188fa0c4a90ec0420d50ae1fbd9d033ef8f5d"} Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.781538 4799 scope.go:117] "RemoveContainer" containerID="88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.819232 4799 scope.go:117] "RemoveContainer" containerID="fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.824499 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgwrw"] Feb 16 14:18:07 crc kubenswrapper[4799]: 
I0216 14:18:07.835450 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rgwrw"] Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.861294 4799 scope.go:117] "RemoveContainer" containerID="c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.898452 4799 scope.go:117] "RemoveContainer" containerID="88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d" Feb 16 14:18:07 crc kubenswrapper[4799]: E0216 14:18:07.899010 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d\": container with ID starting with 88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d not found: ID does not exist" containerID="88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.899046 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d"} err="failed to get container status \"88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d\": rpc error: code = NotFound desc = could not find container \"88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d\": container with ID starting with 88662c34e971c1df5f81d3810c0bbe7b8ebd7966ddc0ff0f26869204207cd22d not found: ID does not exist" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.899068 4799 scope.go:117] "RemoveContainer" containerID="fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816" Feb 16 14:18:07 crc kubenswrapper[4799]: E0216 14:18:07.899775 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816\": container 
with ID starting with fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816 not found: ID does not exist" containerID="fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.899814 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816"} err="failed to get container status \"fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816\": rpc error: code = NotFound desc = could not find container \"fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816\": container with ID starting with fb7b177ba25589893043b8381fe5aaca995ef3cd04ec3b44d547f4db8ec0a816 not found: ID does not exist" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.899842 4799 scope.go:117] "RemoveContainer" containerID="c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23" Feb 16 14:18:07 crc kubenswrapper[4799]: E0216 14:18:07.900339 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23\": container with ID starting with c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23 not found: ID does not exist" containerID="c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23" Feb 16 14:18:07 crc kubenswrapper[4799]: I0216 14:18:07.900365 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23"} err="failed to get container status \"c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23\": rpc error: code = NotFound desc = could not find container \"c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23\": container with ID starting with c0c3115ba89f7e127f8cf44d6236761d713a1afa451534bd2c56ded0c522bd23 not 
found: ID does not exist" Feb 16 14:18:09 crc kubenswrapper[4799]: I0216 14:18:09.169582 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" path="/var/lib/kubelet/pods/112c7aeb-ce95-4098-8a6f-fb2eca87abf2/volumes" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.392422 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8bn5"] Feb 16 14:18:18 crc kubenswrapper[4799]: E0216 14:18:18.393849 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerName="registry-server" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.393880 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerName="registry-server" Feb 16 14:18:18 crc kubenswrapper[4799]: E0216 14:18:18.393910 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerName="extract-utilities" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.393927 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerName="extract-utilities" Feb 16 14:18:18 crc kubenswrapper[4799]: E0216 14:18:18.393955 4799 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerName="extract-content" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.393971 4799 state_mem.go:107] "Deleted CPUSet assignment" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerName="extract-content" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.394547 4799 memory_manager.go:354] "RemoveStaleState removing state" podUID="112c7aeb-ce95-4098-8a6f-fb2eca87abf2" containerName="registry-server" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.401287 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.412458 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8bn5"] Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.592782 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-utilities\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.593114 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-catalog-content\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.593147 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbqp\" (UniqueName: \"kubernetes.io/projected/18ee2e98-8c03-4256-9115-72ff8670625a-kube-api-access-5cbqp\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.695271 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-utilities\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.695334 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5cbqp\" (UniqueName: \"kubernetes.io/projected/18ee2e98-8c03-4256-9115-72ff8670625a-kube-api-access-5cbqp\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.695358 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-catalog-content\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.695923 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-utilities\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.696250 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-catalog-content\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.717201 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbqp\" (UniqueName: \"kubernetes.io/projected/18ee2e98-8c03-4256-9115-72ff8670625a-kube-api-access-5cbqp\") pod \"redhat-operators-s8bn5\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:18 crc kubenswrapper[4799]: I0216 14:18:18.781241 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:19 crc kubenswrapper[4799]: I0216 14:18:19.277594 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8bn5"] Feb 16 14:18:19 crc kubenswrapper[4799]: I0216 14:18:19.930698 4799 generic.go:334] "Generic (PLEG): container finished" podID="18ee2e98-8c03-4256-9115-72ff8670625a" containerID="f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256" exitCode=0 Feb 16 14:18:19 crc kubenswrapper[4799]: I0216 14:18:19.930805 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8bn5" event={"ID":"18ee2e98-8c03-4256-9115-72ff8670625a","Type":"ContainerDied","Data":"f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256"} Feb 16 14:18:19 crc kubenswrapper[4799]: I0216 14:18:19.930919 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8bn5" event={"ID":"18ee2e98-8c03-4256-9115-72ff8670625a","Type":"ContainerStarted","Data":"ea9266c6677ec862713eaf8a20e07cd4c9536b2cf0bba66f80f359b6cbc38ea5"} Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.585715 4799 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rcjqn"] Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.588678 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.617524 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcjqn"] Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.737995 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-catalog-content\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.738173 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6v4\" (UniqueName: \"kubernetes.io/projected/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-kube-api-access-4s6v4\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.738296 4799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-utilities\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.840255 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s6v4\" (UniqueName: \"kubernetes.io/projected/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-kube-api-access-4s6v4\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.840389 4799 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-utilities\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.840507 4799 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-catalog-content\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.841024 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-utilities\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.841111 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-catalog-content\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.863961 4799 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s6v4\" (UniqueName: \"kubernetes.io/projected/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-kube-api-access-4s6v4\") pod \"redhat-marketplace-rcjqn\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:20 crc kubenswrapper[4799]: I0216 14:18:20.910913 4799 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:21 crc kubenswrapper[4799]: I0216 14:18:21.451990 4799 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcjqn"] Feb 16 14:18:21 crc kubenswrapper[4799]: I0216 14:18:21.953775 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8bn5" event={"ID":"18ee2e98-8c03-4256-9115-72ff8670625a","Type":"ContainerStarted","Data":"d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10"} Feb 16 14:18:21 crc kubenswrapper[4799]: I0216 14:18:21.955893 4799 generic.go:334] "Generic (PLEG): container finished" podID="cfdfc4a1-7acf-4f63-8363-00c24d9963ac" containerID="48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014" exitCode=0 Feb 16 14:18:21 crc kubenswrapper[4799]: I0216 14:18:21.955939 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcjqn" event={"ID":"cfdfc4a1-7acf-4f63-8363-00c24d9963ac","Type":"ContainerDied","Data":"48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014"} Feb 16 14:18:21 crc kubenswrapper[4799]: I0216 14:18:21.955965 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcjqn" event={"ID":"cfdfc4a1-7acf-4f63-8363-00c24d9963ac","Type":"ContainerStarted","Data":"9ca385769773f57085f6140c87f1395a1165aabf70815897f456f938e48c7b52"} Feb 16 14:18:22 crc kubenswrapper[4799]: I0216 14:18:22.967685 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcjqn" event={"ID":"cfdfc4a1-7acf-4f63-8363-00c24d9963ac","Type":"ContainerStarted","Data":"bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc"} Feb 16 14:18:23 crc kubenswrapper[4799]: I0216 14:18:23.995041 4799 generic.go:334] "Generic (PLEG): container finished" podID="cfdfc4a1-7acf-4f63-8363-00c24d9963ac" 
containerID="bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc" exitCode=0 Feb 16 14:18:23 crc kubenswrapper[4799]: I0216 14:18:23.995185 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcjqn" event={"ID":"cfdfc4a1-7acf-4f63-8363-00c24d9963ac","Type":"ContainerDied","Data":"bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc"} Feb 16 14:18:29 crc kubenswrapper[4799]: I0216 14:18:29.061507 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcjqn" event={"ID":"cfdfc4a1-7acf-4f63-8363-00c24d9963ac","Type":"ContainerStarted","Data":"6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2"} Feb 16 14:18:29 crc kubenswrapper[4799]: I0216 14:18:29.087165 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rcjqn" podStartSLOduration=2.9594261509999997 podStartE2EDuration="9.087146857s" podCreationTimestamp="2026-02-16 14:18:20 +0000 UTC" firstStartedPulling="2026-02-16 14:18:21.957949154 +0000 UTC m=+6407.550964488" lastFinishedPulling="2026-02-16 14:18:28.08566986 +0000 UTC m=+6413.678685194" observedRunningTime="2026-02-16 14:18:29.077911604 +0000 UTC m=+6414.670926938" watchObservedRunningTime="2026-02-16 14:18:29.087146857 +0000 UTC m=+6414.680162191" Feb 16 14:18:30 crc kubenswrapper[4799]: I0216 14:18:30.100478 4799 generic.go:334] "Generic (PLEG): container finished" podID="18ee2e98-8c03-4256-9115-72ff8670625a" containerID="d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10" exitCode=0 Feb 16 14:18:30 crc kubenswrapper[4799]: I0216 14:18:30.101657 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8bn5" event={"ID":"18ee2e98-8c03-4256-9115-72ff8670625a","Type":"ContainerDied","Data":"d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10"} Feb 16 14:18:30 crc kubenswrapper[4799]: I0216 
14:18:30.911582 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:30 crc kubenswrapper[4799]: I0216 14:18:30.911998 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:31 crc kubenswrapper[4799]: I0216 14:18:31.974525 4799 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rcjqn" podUID="cfdfc4a1-7acf-4f63-8363-00c24d9963ac" containerName="registry-server" probeResult="failure" output=< Feb 16 14:18:31 crc kubenswrapper[4799]: timeout: failed to connect service ":50051" within 1s Feb 16 14:18:31 crc kubenswrapper[4799]: > Feb 16 14:18:32 crc kubenswrapper[4799]: I0216 14:18:32.119050 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8bn5" event={"ID":"18ee2e98-8c03-4256-9115-72ff8670625a","Type":"ContainerStarted","Data":"eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7"} Feb 16 14:18:32 crc kubenswrapper[4799]: I0216 14:18:32.141239 4799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8bn5" podStartSLOduration=2.678698061 podStartE2EDuration="14.141217736s" podCreationTimestamp="2026-02-16 14:18:18 +0000 UTC" firstStartedPulling="2026-02-16 14:18:19.932787698 +0000 UTC m=+6405.525803032" lastFinishedPulling="2026-02-16 14:18:31.395307373 +0000 UTC m=+6416.988322707" observedRunningTime="2026-02-16 14:18:32.13401767 +0000 UTC m=+6417.727033024" watchObservedRunningTime="2026-02-16 14:18:32.141217736 +0000 UTC m=+6417.734233070" Feb 16 14:18:38 crc kubenswrapper[4799]: I0216 14:18:38.781777 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:38 crc kubenswrapper[4799]: I0216 14:18:38.783468 4799 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:38 crc kubenswrapper[4799]: I0216 14:18:38.856208 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:39 crc kubenswrapper[4799]: I0216 14:18:39.256661 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:39 crc kubenswrapper[4799]: I0216 14:18:39.330141 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8bn5"] Feb 16 14:18:40 crc kubenswrapper[4799]: I0216 14:18:40.979649 4799 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.044654 4799 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.205390 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s8bn5" podUID="18ee2e98-8c03-4256-9115-72ff8670625a" containerName="registry-server" containerID="cri-o://eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7" gracePeriod=2 Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.502517 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcjqn"] Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.706568 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.883245 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cbqp\" (UniqueName: \"kubernetes.io/projected/18ee2e98-8c03-4256-9115-72ff8670625a-kube-api-access-5cbqp\") pod \"18ee2e98-8c03-4256-9115-72ff8670625a\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.883347 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-catalog-content\") pod \"18ee2e98-8c03-4256-9115-72ff8670625a\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.883392 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-utilities\") pod \"18ee2e98-8c03-4256-9115-72ff8670625a\" (UID: \"18ee2e98-8c03-4256-9115-72ff8670625a\") " Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.891265 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-utilities" (OuterVolumeSpecName: "utilities") pod "18ee2e98-8c03-4256-9115-72ff8670625a" (UID: "18ee2e98-8c03-4256-9115-72ff8670625a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.896956 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ee2e98-8c03-4256-9115-72ff8670625a-kube-api-access-5cbqp" (OuterVolumeSpecName: "kube-api-access-5cbqp") pod "18ee2e98-8c03-4256-9115-72ff8670625a" (UID: "18ee2e98-8c03-4256-9115-72ff8670625a"). InnerVolumeSpecName "kube-api-access-5cbqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.986095 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:41 crc kubenswrapper[4799]: I0216 14:18:41.987216 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cbqp\" (UniqueName: \"kubernetes.io/projected/18ee2e98-8c03-4256-9115-72ff8670625a-kube-api-access-5cbqp\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.041056 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18ee2e98-8c03-4256-9115-72ff8670625a" (UID: "18ee2e98-8c03-4256-9115-72ff8670625a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.089078 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ee2e98-8c03-4256-9115-72ff8670625a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.219385 4799 generic.go:334] "Generic (PLEG): container finished" podID="18ee2e98-8c03-4256-9115-72ff8670625a" containerID="eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7" exitCode=0 Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.219507 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8bn5" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.219495 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8bn5" event={"ID":"18ee2e98-8c03-4256-9115-72ff8670625a","Type":"ContainerDied","Data":"eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7"} Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.220387 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8bn5" event={"ID":"18ee2e98-8c03-4256-9115-72ff8670625a","Type":"ContainerDied","Data":"ea9266c6677ec862713eaf8a20e07cd4c9536b2cf0bba66f80f359b6cbc38ea5"} Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.220447 4799 scope.go:117] "RemoveContainer" containerID="eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.222850 4799 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rcjqn" podUID="cfdfc4a1-7acf-4f63-8363-00c24d9963ac" containerName="registry-server" containerID="cri-o://6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2" gracePeriod=2 Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.251846 4799 scope.go:117] "RemoveContainer" containerID="d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.272676 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8bn5"] Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.279417 4799 scope.go:117] "RemoveContainer" containerID="f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.283089 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s8bn5"] Feb 16 14:18:42 crc kubenswrapper[4799]: 
I0216 14:18:42.441986 4799 scope.go:117] "RemoveContainer" containerID="eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7" Feb 16 14:18:42 crc kubenswrapper[4799]: E0216 14:18:42.442888 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7\": container with ID starting with eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7 not found: ID does not exist" containerID="eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.443030 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7"} err="failed to get container status \"eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7\": rpc error: code = NotFound desc = could not find container \"eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7\": container with ID starting with eda62c6e6e9c63c90931c98e4c4822a733090ba69e86737094d0d80f715f35f7 not found: ID does not exist" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.443157 4799 scope.go:117] "RemoveContainer" containerID="d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10" Feb 16 14:18:42 crc kubenswrapper[4799]: E0216 14:18:42.443603 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10\": container with ID starting with d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10 not found: ID does not exist" containerID="d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.443723 4799 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10"} err="failed to get container status \"d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10\": rpc error: code = NotFound desc = could not find container \"d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10\": container with ID starting with d55f7fa379af156f7b7e96c8b4b033d13dd71804737b1ac740636752e8c5fa10 not found: ID does not exist" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.443814 4799 scope.go:117] "RemoveContainer" containerID="f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256" Feb 16 14:18:42 crc kubenswrapper[4799]: E0216 14:18:42.444219 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256\": container with ID starting with f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256 not found: ID does not exist" containerID="f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.444259 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256"} err="failed to get container status \"f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256\": rpc error: code = NotFound desc = could not find container \"f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256\": container with ID starting with f14ba76638fccd92394b92e422f4a05546c456d2af2c4bcbf9be16d2e49ed256 not found: ID does not exist" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.706571 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.908047 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-utilities\") pod \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.908665 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s6v4\" (UniqueName: \"kubernetes.io/projected/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-kube-api-access-4s6v4\") pod \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.908732 4799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-catalog-content\") pod \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\" (UID: \"cfdfc4a1-7acf-4f63-8363-00c24d9963ac\") " Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.908984 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-utilities" (OuterVolumeSpecName: "utilities") pod "cfdfc4a1-7acf-4f63-8363-00c24d9963ac" (UID: "cfdfc4a1-7acf-4f63-8363-00c24d9963ac"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.910020 4799 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.916258 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-kube-api-access-4s6v4" (OuterVolumeSpecName: "kube-api-access-4s6v4") pod "cfdfc4a1-7acf-4f63-8363-00c24d9963ac" (UID: "cfdfc4a1-7acf-4f63-8363-00c24d9963ac"). InnerVolumeSpecName "kube-api-access-4s6v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:18:42 crc kubenswrapper[4799]: I0216 14:18:42.947697 4799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfdfc4a1-7acf-4f63-8363-00c24d9963ac" (UID: "cfdfc4a1-7acf-4f63-8363-00c24d9963ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.011340 4799 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s6v4\" (UniqueName: \"kubernetes.io/projected/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-kube-api-access-4s6v4\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.011371 4799 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfdfc4a1-7acf-4f63-8363-00c24d9963ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.161001 4799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ee2e98-8c03-4256-9115-72ff8670625a" path="/var/lib/kubelet/pods/18ee2e98-8c03-4256-9115-72ff8670625a/volumes" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.232751 4799 generic.go:334] "Generic (PLEG): container finished" podID="cfdfc4a1-7acf-4f63-8363-00c24d9963ac" containerID="6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2" exitCode=0 Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.232803 4799 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcjqn" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.232838 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcjqn" event={"ID":"cfdfc4a1-7acf-4f63-8363-00c24d9963ac","Type":"ContainerDied","Data":"6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2"} Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.232865 4799 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcjqn" event={"ID":"cfdfc4a1-7acf-4f63-8363-00c24d9963ac","Type":"ContainerDied","Data":"9ca385769773f57085f6140c87f1395a1165aabf70815897f456f938e48c7b52"} Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.232880 4799 scope.go:117] "RemoveContainer" containerID="6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.264448 4799 scope.go:117] "RemoveContainer" containerID="bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.265427 4799 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcjqn"] Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.273467 4799 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcjqn"] Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.286773 4799 scope.go:117] "RemoveContainer" containerID="48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.306789 4799 scope.go:117] "RemoveContainer" containerID="6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2" Feb 16 14:18:43 crc kubenswrapper[4799]: E0216 14:18:43.307318 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2\": container with ID starting with 6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2 not found: ID does not exist" containerID="6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.307356 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2"} err="failed to get container status \"6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2\": rpc error: code = NotFound desc = could not find container \"6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2\": container with ID starting with 6282e9667979e31f6b9430f8d3176fd892af2deabc9ea5474bb8bd7776bf85b2 not found: ID does not exist" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.307384 4799 scope.go:117] "RemoveContainer" containerID="bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc" Feb 16 14:18:43 crc kubenswrapper[4799]: E0216 14:18:43.307729 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc\": container with ID starting with bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc not found: ID does not exist" containerID="bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.307755 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc"} err="failed to get container status \"bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc\": rpc error: code = NotFound desc = could not find container \"bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc\": container with ID 
starting with bda5c8a9616c695fb4b1de60e49e97a45182e03527d5112adf3f8d9672021ddc not found: ID does not exist" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.307773 4799 scope.go:117] "RemoveContainer" containerID="48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014" Feb 16 14:18:43 crc kubenswrapper[4799]: E0216 14:18:43.308038 4799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014\": container with ID starting with 48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014 not found: ID does not exist" containerID="48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014" Feb 16 14:18:43 crc kubenswrapper[4799]: I0216 14:18:43.308068 4799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014"} err="failed to get container status \"48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014\": rpc error: code = NotFound desc = could not find container \"48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014\": container with ID starting with 48f22e0a800e627a405d5e74e3f0eded06f6d42e7eda306d69d7780469f40014 not found: ID does not exist"